On Tuesday, 11 August 2009 at 13:03 -0700, Russ Allbery wrote:

> * These packages are normal Debian packages with normal package metadata,
>   but will generally have a symlink in /usr/share/doc/<package> pointing
>   to the package for which they provide debugging information.

Actually I don’t see the point in this symlink. It only makes things more complicated, especially if there is no one-to-one mapping between ddebs and debs.

> Open questions:
>
> * Can we limit this package namespace to *only* detached debugging
>   symbols, not all the other sorts of debugging packages that people
>   create with special compiler options or optional code features?

I think we should. The purpose of the proposal is to automate as much as possible, not to open a new section where anyone can dump anything.

> * What about contrib and non-free packages? Do they just lose here?

How about yes?

> * Can we require a one-to-one correspondence between binary package names
>   and debug package names that provide symbols for that binary package? I
>   think we should; I think it would make the system more straightforward.

Using a single ddeb for the source package avoids such issues. The alternatives imply automatically generating lots of Replaces and Conflicts fields, and this is just too fragile. The consensus on #debian-dak when we discussed this specific issue was to use one ddeb for each source package by default, and to leave the door open for the maintainer to override this default with several ddebs in a source, using a new header in the control file. This way we can keep things as simple as possible, without losing the possibility to handle corner cases that will arise.
It depends on what you mean by default.
As we saw earlier, the convention for Windows header files is that if you don't specify a particular version, then you get the most recent version. The shell common controls header file follows this convention, so if you include the Windows XP version of commctrl.h, you get functions, messages, and structures designed for use with version 6 of the common controls. (And functions, messages, and structures may not work with version 5 of the shell common controls due to changes in structure sizes, for example.) So from the Windows XP Platform SDK header file's point of view, the default version of the shell common controls is version 6.
On the other hand, there's the question of what version of the shell common controls you actually get at run time. Prior to Windows XP, the answer was simple: You got the most recent version installed on the machine.
With Windows XP, however, the rules changed. The visuals team wanted to do something more ambitious with the common controls, but the compatibility constraints also created significant risk. The solution was to use side-by-side assemblies.
For compatibility, if a program didn't specify what version of the shell common controls it wanted, it got version 5.82, which was carefully designed for extremely high compatibility with the previous version, 5.81, which came with Windows 2000 and Windows Me. Now, version 5.82 is not completely identical to 5.81, because it also needs to interoperate with version 6. More on this later.
If a program wanted to use version 6 of the common controls, it had to say so explicitly in a manifest. (What we on the shell team informally call a "v6 manifest".) That way, only programs that asked for the new behavior got it. The theory being that if you asked for the new behavior, you presumably tested your program against version 6 of the common controls to verify that it behaves as you expected. This freed up the visuals team to make more substantial changes to the common controls without having to worry about some old program that relied on some strange undocumented behavior of the common controls. That old program would get version 5.82, which was designed for high compatibility.
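For reference, the opt-in was a dependency entry in the application's manifest naming the common controls side-by-side assembly; the assembly name, version, and publicKeyToken below are the standard published values for Microsoft.Windows.Common-Controls:

```xml
<dependency>
  <dependentAssembly>
    <assemblyIdentity
        type="win32"
        name="Microsoft.Windows.Common-Controls"
        version="6.0.0.0"
        processorArchitecture="*"
        publicKeyToken="6595b64144ccf1df"
        language="*" />
  </dependentAssembly>
</dependency>
```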
Now, on that interoperability thing. There are places where the common controls library creates an object which you can then use with other common controls. For example, you can create an image list with ImageList_Create and then use that image list in a list view or tree view. Care had to be taken so that an image list created by version 5 of the common controls (a "v5 image list") could be used by a list view created by version 6 (a "v6 list view"), or conversely that a v6 image list could be used in a v5 list view. This sort of cross-version image list usage is actually quite common: Any application that calls Shell_GetImageLists (or its old-fashioned equivalent, SHGetFileInfo with the SHGFI_SYSICONINDEX flag) will get a v6 image list. If that application uses version 5 of the common controls (because it doesn't have a v6 manifest), then it will find itself using a v6 image list inside a v5 list view. Since each DLL has its own manifest, you can quickly find yourself in a case where there is a hodgepodge of v5 and v6 components all inside a single process, and they all have to work with each other.
Another example of this cross-version interoperability is the HPROPSHEETPAGE. Property sheet pages created with CreatePropertySheetPage from one version of the shell common controls had to work with the PropertySheet function of the other version. This happens a lot with shell property sheet extensions. The shell namespace will ask the shell extensions to provide their custom property sheets, and all the ones written for Windows 2000 will hand back a v5 HPROPSHEETPAGE. But Explorer is going to display that property sheet with the v6 PropertySheet function. That v5 property sheet page had better work even when hosted inside a v6 property sheet.
Okay, but back to the original problem. If you don't specify what version of the header file you want, then you get the latest version (version 6 if you got the header file from the Windows XP Platform SDK). On the other hand, if you don't specify what version of the DLL you want, you get version 5.82, the compatible version of the DLL. Yes, this is a mismatch. Be on the lookout. This is what happens when a header file convention is at odds with a compatibility decision.
Meow.
Is 6 supposed to be backwards-compatible with 5.82? I’ve had problems with using 5.82 structures with 6. It is understandable if it isn’t, since I need a manifest to get 6.
Also, what’s the common controls version in Vista? Is it still 6?
By the way, I have to applaud the plunge you guys took regarding using SxS dlls for v5 vs. v6. I especially respect the “isolation aware” stuff, as it allowed DLL’s in our tools to use v6 features, while letting us take our time on transitioning everything else to v6.
I have to wonder, management-wise, how hard of a problem was the v5/v6 interop? Was it something that really manifested as an issue late in development, or was the full scope pretty much known at the beginning?
[v5/v6 interop was a huge undertaking. Your last question presents a false dichotomy. We knew up front that it would be a lot of work, but that doesn’t mean we understood the full scope up front. Do you ever know the full scope of a large undertaking before you begin? There are always unexpected problems that arise. -Raymond]
Oh, I get that, I was just curious the level of planning that goes into an undertaking like this. Like was it “well, we’ll need some interop” or was it down to the nitty gritty of “property pages will need to be cross compatible” ? Also, was the dll-isolation-support planned from the get-go, so something that came of incompatibilities after implementation began?
To me, it could just be an interesting case-study into planning-and-consequences of things at Microsoft. And since this is relatively recent, it seems a bit more relevant given the maturity of the platform as compared to design decisions on Win16 or Win32. I’m not trying to grill you, but it seems to be one of the more impressive and risky gambits given the huge size of windows, and a peephole as to how it was pulled off so well could be a decent learning experience.
[I can’t tell whether you’re asking me to write up the case study or you’re just speculating out loud. -Raymond]
If there is a case study in the offing that would be worth the price of admission.
Of course, when the price of admission is nothing, anything at all (including nothing itself) is worth the price of admission… usually considerably more.
I use SysInternals/Microsoft’s ListDLLs to check which DLL is actually loaded at runtime.
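Along the same lines, you can ask the loaded comctl32 itself which version it is through its documented DllGetVersion export. Here is a small Python/ctypes sketch (only callable on Windows; the structure fields follow the documented DLLVERSIONINFO layout):

```python
import ctypes

class DLLVERSIONINFO(ctypes.Structure):
    # Layout of the documented DLLVERSIONINFO structure (5 DWORDs)
    _fields_ = [
        ("cbSize", ctypes.c_uint32),
        ("dwMajorVersion", ctypes.c_uint32),
        ("dwMinorVersion", ctypes.c_uint32),
        ("dwBuildNumber", ctypes.c_uint32),
        ("dwPlatformID", ctypes.c_uint32),
    ]

def comctl32_version():
    """Return (major, minor) of the comctl32.dll actually loaded.

    Windows only: ctypes.WinDLL does not exist on other platforms.
    """
    comctl = ctypes.WinDLL("comctl32")
    info = DLLVERSIONINFO()
    info.cbSize = ctypes.sizeof(info)
    # DllGetVersion returns S_OK (0) on success
    if comctl.DllGetVersion(ctypes.byref(info)) != 0:
        raise OSError("DllGetVersion failed")
    return info.dwMajorVersion, info.dwMinorVersion
```

Run under a v6 manifest this should report 6.x; without one, 5.82, matching the behavior described in the post.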
[I can’t tell whether you’re asking me to write up the case study or you’re just speculating out loud. -Raymond]
Haha, I was asking _you_. :)
Raymond, I’m sure he wasn’t asking you to write a white paper, he is, like myself, just curious about how things like this work. Unfortunately there aren’t many people out there with this kind of knowledge! Anyway, just an idea for a future blog post if you feel so inclined :)
[I’d have to interview people. It’s not like I attend every meeting and read every spec. A white paper is a lot more work than a blog post. White papers take weeks to write, review, and approve. A blog post doesn’t even have to be true! -Raymond]
I have enough spare time to write a white paper for free. Of course the paper has to be white to start with. Guess what colour it will be when I’m done.
Anyway Jack Mathews was teasing you. He even wrote “haha” when teasing. I can’t believe I had to write this.
Speak of the devil.
> The solution was to use side-by-side assemblies.
"Some people, faced with a problem, say ‘I know what, I’ll use regular expressions.’ Then they have two problems."
s/regular expressions/side-by-side/g
> you can quickly find yourself in a case where
> there is a hodgepodge of v5 and v6 components
> all inside a single process, and they all have
> to work with each other
Then maybe that’s the only case where they really work. I do have to congratulate you on getting it working.
DLL Hell .Net is worse than DLL Hell Classic.[*]
[* Yeah this is the fourth time I’ve posted this in two days, but it’s still true.]
[And people think I’m silly for thinking that comments are directed at me. It seems rather presumptuous, don’t you think, to expect somebody to write a white paper in their spare time just because you asked them to? -Raymond]
Well I’m not holding my breath, but I at least wanted the request out there. At the very least, it may be something to expand on in further blog posts if you have the time. I would hope I’m speaking for a decent amount of people that finds this sort of talk interesting. Most of us don’t have the opportunity to work on projects with an impact on literally tens of millions of people.
That’s actually one of the more interesting threads on this blog for me, is seeing the butterfly effect of a lot of seemingly small changes/change requests. Most of the posts, though, are more in the vein of “we didn’t do this because,” so it’s a nice insight to see “we knew this would hurt, but we did it anyway.”
I’m not teasing anyone. I thought the reply was funny, because yeah, re-reading it did make it just seem like pointless out-loud speculation.
What are the implications of using run-time linking with LoadLibrary (with and without manifest) to get comctl32? Because there’s no version number in LoadLibrary(TEXT(“COMCTL32.DLL”)).
howdy y'all-
i was out of town for a few days last week, but i've now found the time
to release the latest batch of plugin updates. this batch includes new
versions of the Console, FTP, and SQL plugins.
* Console 3.0.3: there was no way to insert a literal $ in the command
line; fixed a bug in the console tool bar's handling of history; fixed
bugs with error matcher handling; "Console To Front" command
didn't work; now includes Ant build file; requires jEdit 3.2pre9,
EditBus 1.1, ErrorList 1.1.1, and JDK 1.1
* FTP 0.3: soft links were not resolved properly; file names with spaces
in them were not supported; made file list parsing code more
compatible with various servers; "Open from FTP Server" command didn't
select the newly opened buffer; file permissions are now preserved
when a file is being saved; code cleanups; documentation is now in
DocBook-XML format; requires jEdit 2.7pre2 and JDK 1.1
* SQL 0.8: dependency on CommonControls 0.1 - now plugin uses
illustrious HelpfulJTable; classpath for JDBC drivers can be set in
the "Options" dialog (now you don't have to move your JDBC drivers
into jEdit directories); minor bugfixes, docs updated; requires jEdit
2.7pre2, CommonControls 0.1, EditBus 0.9.4, XML 0.3, and JDK 1.2
-md
hey all-
this afternoon, a few plugins that have ended up taking me a little
longer than usual have at last made their way onto Plugin Central. go
get 'em if you're a Python hacker or a XInsert user.
* CommonControls 0.1: new plugin maintained by Sergey Udaltsov with
code contributions from others, including Dirk Moebius; provides a
common set of enhanced Swing components intended for use by other
jEdit plugins; requires jEdit 2.7pre2 and JDK 1.1
* JythonInterpreter 0.5: several bug fixes; completely changed jython
plugin support; added jython distribution to package; requires jEdit
3.1final, EditBus 1.0, and JDK 1.3
* PyUtils 0.1: new plugin by Ollie Rutherfurd and Carlos Quiroz;
contains utilities for Python development in jEdit, including
PyBrowse, a python class browser; requires jEdit 3.1final,
JythonInterpreter 0.5, and JDK 1.3
* XInsert 1.7: fixes bug #441894 ("XInsert alters tree UI elements");
updated for jEdit 3.2.2; requires jEdit 3.2.2 and JDK 1.2
-md
jEdit 3.2.2 is now available from <>.
+ New Features
- JHTML syntax highlighting (Will Sargent)
- RelationalView syntax highlighting (Will Sargent)
- clipboardHangWorkaround.bsh startup script included. If your JVM
hangs when jEdit tries to copy or paste, take a look at this script.
+ Enhancements
- Eliminated delay when showing check box menu items for the first time
- JARClassLoader is much faster now
- BeanShell performance tweak: loaded classes are cached in the
namespace which contained the 'import' that resolved them
- BeanShell performance tweak: default imports (java.lang, java.awt,
etc) are only loaded into the top-level namespace, and due to the
above change, classes found in the default imported packages are
cached and only ever have to be searched for once
- New version of jEditLauncher (John Gellene)
- When user elects to retain command line parameters, an upgrade
always updates path to jar file.
- Location of server file set from -user.home value defined on
command line if jEdit -settings parameter is not used.
- Eliminates dependence on Windows Sockets 2.0; version 1.1 is
sufficient.
- Installer module now writes detailed log to install.log in jEdit
installation directory.
+ Bug Fixes
- 'Welcome' link in help didn't work
- Fixed another grab key dialog box bug
- Fixed minor slowdown when making changes in the first or last three
lines of a buffer
- On Windows, javax.swing.filechooser.FileSystemView.isHiddenFile()
always returns false. So now we attempt calling the File.isHidden()
method via reflection to find out if a file is hidden or not.
- File permissions were not preserved on MacOS X, because FileVFS did
not detect if it was running on MacOS X properly
- 'Stop' button in plugin list download progress should now work under
all Java versions
+ API Changes
- New MiscUtilities.isToolsJarAvailable() method, attempts to load the
JDK tools.jar, returns true if successful (Dirk Moebius)
howdy-
this evening, i uploaded updates of three plugins and the initial
release of another. if you install XML 0.4 or JTidyPlugin 1.0, you'll
need to remove all versions of the HTML plugin (as well as Tidy.jar if
it's still around).
* AntFarm 0.5.1: added threading so AntFarm does not block the AWT
during extended Ant tasks (Kyle F. Downing); improved message when
tools.jar is not found; updated button images, in line with the rest
of jEdit's UI; requires jEdit 3.2pre10, EditBus 1.0.1, ErrorList 1.1,
XML 0.3, and JDK 1.1
* Clipper 0.9.2: integrated 'Go to Clipper' macro; updated for jEdit
3.2; requires jEdit 3.2final and JDK 1.1
* JTidyPlugin 1.0: initial Plugin Central release; this plugin is the
JTidy portion of the HTML (which is now deprecated); the tag-related
actions of the HTML plugin are now part of the XML plugin; requires
jEdit 3.2.1, EditBus 1.0.1, and JDK 1.1
* XML 0.4: bug fixes; can now use catalog files in XML Catalog format
or OASIS SOCAT format to resolve system and public IDs; new "XML
Insert" window lists declared elements and entities (loaded from the
DTD for XML files, a built-in list for HTML files); "Edit Tag" dialog
box for graphically editing tag attributes; tag and entity completion,
invoked when < or & is pressed; tag highlighting moved from HTML
plugin; tag-related commands moved from HTML plugin; and more...; HTML
plugin is not compatible with this plugin and must be removed;
requires jEdit 3.2.1, EditBus 1.0.1, ErrorList 1.1, and JDK 1.1
-md
hello-
this afternoon, five updated plugins have been released on Plugin
Central. BufferTabs 0.7.5, Console 3.0.2, ContextHelp 1.4, Sessions
0.7.1, and SQL 0.6.
* BufferTabs 0.7.5: fixed bug #437925 and its duplicates which happened
with the first versions of startup.bsh or the latest pre releases of
jEdit 3.2; BufferTabs are no longer enabled by default after the first
installation; requires jEdit 3.2final and JDK 1.1
* Console 3.0.2: environment variables disabled on Windows; bug fixes;
documentation updates; requires jEdit 3.2pre9, EditBus 1.1, ErrorList
1.1.1, and JDK 1.1
* ContextHelp 1.4: uses the path for the autosave file instead of the
disk file if the file is currently "dirty"; now contains a build.xml
file; requires jEdit 2.7pre2 and JDK 1.1
* Sessions 0.7.1: the Sessions toolbar can now be added to jEdit's main
toolbar, saving screen space; requires jEdit 3.2.1 and JDK 1.1
* SQL 0.6: new logo instead of "Oracle Plugin" (Do you like it? If no -
contribute your ideas); more sophisticated OracleVFS; support for DB2
and PostgreSQL (great thanks to Carmine Lucarelli and Pierluigi
Mangani); some bugfixes; renamed from SQLPlugin to SQL (the Plugin
Manager "upgrade" function will not work; you should remove SQLPlugin
or OraclePlugin and install SQL 0.6); requires jEdit 2.7pre2, EditBus
0.9.4, XML 0.3, and JDK 1.2
-md
jEdit 3.2.1 is now available from <>.
+ New Features
- 'Color Picker' macro added
- Objective-C syntax highlighting (Kris Kopicki)
+ Enhancements
- Updated PV-WAVE edit mode (Ed Stewart)
- Updated 'Write HyperSearch Results' macro (John Gellene)
- Updated jEditLauncher (John Gellene)
+ Bug Fixes
- Clicks in table of contents in help viewer didn't work
- A few actions didn't record a trailing ; when recording a macro
- Fixed another BeanShell performance problem
- Fixed a few minor problems if a plugin threw an exception in its
start() method (Dirk Moebius)
- MRJ on MacOS 8/9 returns an os.name of "Mac OS", but jEdit was
checking for "MacOS".
- Fixed minor grab key dialog box bug
+ API Changes
- View.getToolBar() method added
- Status bar change necessary for upcoming Vimulator plugin (Mike Dillon)
Code:
import dfs
image = dfs.ssd()
image.set_title('Cool stuff')
image.add_file('$.!BOOT', "*BASIC\rCHAIN \"MENU\"\r", load=0x0000, exec=0x0000, locked=False)
image.add_file("$.MENU", open("menu.bas", "rb").read(), load=0x0000, exec=0x0000, locked=False)
open('myprog.ssd', 'wb').write(image.get())
I started knocking my own up, then realised how much I hated doing it.
I found which looks as though I could turn it into what I want, but it's not directly applicable as it's more of a command line tool. So before I start with that, I thought I'd ask if anyone has already done this.
I really need this to be under some kind of open source licence as well, I'm afraid. I'm building a command line driver for PLASMA and I'd like it to be able to take PLASMA source as input and emit an emulator-ready SSD at the other end, so I'd want to put the code in the PLASMA repository on github.
Sorry if this already exists and my searching powers were weak...
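For what it's worth, the catalogue of a standard single-sided DFS disc is small enough that a writer along the lines of the API sketched above fits in a page. The following is a rough sketch based on the commonly documented Acorn DFS layout (two 256-byte catalogue sectors, file data stored contiguously from sector 2). The class name is made up, exec is renamed exec_addr so it stays a valid identifier in Python 2 as well, and details like boot options, the locked bit, and catalogue entry ordering are simplified:

```python
SECTOR = 256
SECTORS = 40 * 10  # 400 sectors: 40 tracks, 10 sectors per track, one side

class SSD:
    def __init__(self):
        self.title = ""
        self.files = []  # (directory char, leaf name, data, load, exec addr)

    def set_title(self, title):
        self.title = title[:12]  # DFS disc titles are at most 12 characters

    def add_file(self, name, data, load=0, exec_addr=0):
        # name is given "$.MENU" style: directory character, dot, leaf name
        directory, _, leaf = name.partition(".")
        self.files.append((directory, leaf[:7], data, load, exec_addr))

    def get(self):
        image = bytearray(SECTORS * SECTOR)
        title = self.title.ljust(12)[:12].encode("ascii")
        image[0:8] = title[:8]                   # sector 0: first 8 title chars
        image[SECTOR:SECTOR + 4] = title[8:12]   # sector 1: last 4 title chars
        image[SECTOR + 5] = len(self.files) * 8  # number of entries, times 8
        image[SECTOR + 6] = (SECTORS >> 8) & 0x03  # high bits of sector count
        image[SECTOR + 7] = SECTORS & 0xFF         # low byte of sector count
        sector = 2  # file data starts right after the two catalogue sectors
        for i, (directory, leaf, data, load, exec_addr) in enumerate(self.files):
            name_off = 8 + i * 8
            image[name_off:name_off + 7] = leaf.ljust(7).encode("ascii")
            image[name_off + 7] = ord(directory)  # top bit would mark "locked"
            entry = SECTOR + 8 + i * 8
            image[entry + 0] = load & 0xFF
            image[entry + 1] = (load >> 8) & 0xFF
            image[entry + 2] = exec_addr & 0xFF
            image[entry + 3] = (exec_addr >> 8) & 0xFF
            image[entry + 4] = len(data) & 0xFF
            image[entry + 5] = (len(data) >> 8) & 0xFF
            image[entry + 6] = (((len(data) >> 16) & 0x03) << 4) | ((sector >> 8) & 0x03)
            image[entry + 7] = sector & 0xFF
            image[sector * SECTOR:sector * SECTOR + len(data)] = data
            sector += (len(data) + SECTOR - 1) // SECTOR
        return bytes(image)
```

Anything produced this way should still be sanity-checked in an emulator before relying on it.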
ETA: Not sure if this should be in the "utilities" forum instead, I'll leave it up to the moderators to advise...
I'm trying to make a basic window with the text "t" inside using Tkinter, however when running the code the shell spits out "NameError: name 'Label' is not defined". I'm running Python 3.5.2.
I followed the tutorials but the problem is in the line label = Label(root, text="test").
import tkinter
root = tkinter.Tk()
sheight = root.winfo_screenheight()
swidth = root.winfo_screenwidth()
root.minsize(width=swidth, height=sheight)
root.maxsize(width=swidth, height=sheight)
label = Label(root, text="test")
label1.pack()
root = mainloop()
You never imported the Label class. Try tkinter.Label. Check the import statements for those tutorials. Maybe they imply from tkinter import *.
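Putting that together, a corrected version of the original script might look like the sketch below. The window-building code is wrapped in a function (call build_window() to run it), and the two other slips in the original, packing the undefined label1 and assigning root = mainloop(), are fixed as well:

```python
import tkinter

def build_window():
    root = tkinter.Tk()
    sheight = root.winfo_screenheight()
    swidth = root.winfo_screenwidth()
    root.minsize(width=swidth, height=sheight)
    root.maxsize(width=swidth, height=sheight)
    # Label lives in the tkinter module, so qualify it
    # (or add "from tkinter import Label" at the top instead)
    label = tkinter.Label(root, text="test")
    label.pack()       # the original packed "label1", which was never defined
    root.mainloop()    # mainloop is called on root, not "root = mainloop()"
```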
Solution 1
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Line(object):
    def __init__(self, p1, p2):
        self.p1 = p1
        self.p2 = p2

    def slope(self):
        return (self.p2.y - self.p1.y) / (self.p2.x - self.p1.x)

    def y_intercept(self):
        m = self.slope()
        return self.p1.y - (m * self.p1.x)

    def formula(self):
        tpl = 'y = {m}x + {b:g}'
        m = self.slope()
        if m == 1:
            m = ''
        return tpl.format(m=m, b=self.y_intercept())
Points and Lines
Time to make things a bit more complicated. Now you need to write two classes!
Create a class Point that receives two parameters x and y and stores them within each point created.
Next create a class Line that receives two points as parameters p1 and p2.
It needs the following 3 methods:
- slope: returns the slope of the line based on the two points.
  Equation: (y2 - y1)/(x2 - x1)
- y_intercept: returns the y-intercept of the line.
  Equation: y1 - (slope * x1)
- formula: returns a string of the formula of the line (if the slope is 1, omit it).
  Equation: y = mx + b, where m is slope and b is y-intercept
Note: None of these methods receive any external parameters
For the formula string, if the y-intercept can be truncated (e.g. using 1 instead of 1.0), do it using {:g} in your string formatting.

"{b:g}".format(b=3.0) # 3
"{b:g}".format(b=3.2) # 3.2
Example:

p1 = Point(0, 1)
p2 = Point(1, 2)
l = Line(p1, p2)
l.slope() # 1
l.y_intercept() # 1
l.formula() # 'y = x + 1.0'
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hello,
I'm looking to create a custom scripted field called 'Initiative End Date'. It would show on the Epic level the 'End Date' field value of the parent Initiative, which is accessible through the 'Parent Link' field that comes with Jira Portfolio.
My end goal here is JQL queries that show me when the Epic end date is later than the Initiative end date so we can adjust accordingly, and the only way I can think to do this is use Scriptrunner to pull this in as a field on the same (Epic) level, and then do a datecompare. If there are other ways, I'm open.
There are similar requests out there to pull in fields through Epic Link or subtask, but no idea where to start or the code regarding Parent Link.
Any help you can provide is much appreciated. Thank you!
-Anthony
This is the script I came up with for a scripted field:

import com.atlassian.jira.component.ComponentAccessor

def parentLink = ComponentAccessor.getCustomFieldManager().getCustomFieldObjectByName("Parent Link")
def value = issue.getCustomFieldValue(parentLink)
def parentKey = value.key
def parentIssue = ComponentAccessor.issueManager.getIssueByCurrentKey(parentKey)
if (parentIssue) {
    def endDate = ComponentAccessor.getCustomFieldManager().getCustomFieldObjectByName("Initiative End Date")
    def endDateValue = parentIssue.getCustomFieldValue(endDate)
    return endDateValue
}
Joshua, thank you for the quick response!! I'm getting a few errors when I try to put in that code.. Can you let me know what you think? Am I missing a declaration? I'm on Jira 7.3
Hi Anthony,
The static type checker for ScriptRunner can be wrong. Sometimes, it will tell you a matching method cannot be found even if it does exist. I think that's happening here. Try to just ignore the errors, save the script field, and go visit an Epic issue to see if anything is being displayed for the field.
No luck... I don't see an error, but the field isn't bringing back dates using existing or new Epics. When I preview using a new Epic I just created (associated to an initiative with end dates), the Preview comes back as 'null' (no result, null log)
I was able to find an error:
2017-08-16 14:58:26,009 ERROR [customfield.GroovyCustomField]: *************************************************************************************
2017-08-16 14:58:26,009 ERROR [customfield.GroovyCustomField]: Script field failed on issue: LIC-1724, field: Initiative End Date
java.lang.NullPointerException: Cannot get property 'key' on null object
at Script147.run(Script147.groovy:7)
I think I slightly misread your question. I just realized you wanted the value of the "End Date" field on the Epic. The name of this scripted field will be "End Date Initiative".
So we should change the script to get the value of "End Date":

import com.atlassian.jira.component.ComponentAccessor

def parentLink = ComponentAccessor.getCustomFieldManager().getCustomFieldObjectByName("Parent Link")
def value = issue.getCustomFieldValue(parentLink)
def parentKey = value.key
def parentIssue = ComponentAccessor.issueManager.getIssueByCurrentKey(parentKey)
if (parentIssue) {
    def endDate = ComponentAccessor.getCustomFieldManager().getCustomFieldObjectByName("End Date")
    def endDateValue = parentIssue.getCustomFieldValue(endDate)
    return endDateValue
}
Hi Joshua!
Thank you! So I actually caught that yesterday and changed it to End Date... but I figured out what I did wrong.
the field I needed is called 'End date' with a lowercase 'd', and in an odd coincidence a plugin I installed on Tuesday auto-created a competing custom field called 'End Date' with a capital D... so it was not exactly an error, is was a legit Null value.
So this now brings back the date!! One issue though is still happening.
It's pulling in the date in this format: 2017-07-21 00:00:00.0
Which is not working in a datecompare function, bringing back this error:
Field name: Initiative End Date not found or not a date or datetime.
I realized this was because the Script Template was set to Text Field, however when I changed it, none of the other options render the date.
Text Field: 2017-07-21 00:00:00.0
Date Time Picker: Invalid date
Absolute Date Time: $datePickerFormatter.format($value)
Is there something I can put in 'Custom' to read it as a date, etc...?
Update! I changed the script field to be use the Search Template to 'Date Time Range picker' and the Template as Date Time Picker, and it now looks like
Initiative End Date: 21/Jul/17 12:00 AM
When I run a the date compare I still get this message
Initiative End Date not found or not a date or datetime.
and when I run the field as a preview, get this error:
The indexer for this field expects a java.util.Date but the script returned a java.sql.Timestamp
Can you advise what options I should try?
Thank you again for all your help!
I'm looking for the opposite. In groovy script how can I query child issues (from a parent that is defined via portfolio). I'll be able to figure out the rest once I can get a Collection<Issue> object. Getting the custom field "Child issues" returns a null object =-(
I was able to solve my issue, but in a very messy way. Essentially I needed to inject a JQL call into the code to get the child issues.
String jqlSearch = "issueFunction in portfolioChildrenOf(\"key = " + issue.getKey() + "\")"
SearchService searchService = ComponentAccessor.getComponent(SearchService.class)
ApplicationUser user = ComponentAccessor.getJiraAuthenticationContext().getLoggedInUser()
List<Issue> issues = null
IssueManager issueManager = ComponentAccessor.getIssueManager()
SearchService.ParseResult parseResult = searchService.parseQuery(user, jqlSearch)
if (parseResult.isValid()) {
def searchResult = searchService.search(user, parseResult.getQuery(), PagerFilter.getUnlimitedFilter())
// Transform issues from DocumentIssueImpl to the "pure" form IssueImpl (some methods don't work with DocumentIssueImps)
issues = searchResult.getIssues() // This only returns the first page of results of 100 issues...I think, would an initiative ever have more than 100 issues?
} else {
log.error("Invalid JQL: " + jqlSearch);
return "error in JQL search"
}
This only returns the first "page" of issues (epics in an initiative for my purposes), but it works.
Solved above two errors in my code by changing
def value = issue.getCustomFieldValue(parentLink)
to
String value = issue.getCustomFieldValue(parentLink)
and
def parentIssue = ComponentAccessor.issueManager.getIssueByCurrentKey(parentKey)
to
def parentIssue = ComponentAccessor.getIssueManager().getIssueByCurrentKey(parentKey)
QTSerialPort read/write
- desperatenewbie
Hi,
I'm using QT 4.8 with qtserialport and i am having some trouble communicating with the device. Here is my simple main function
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
foreach (const QSerialPortInfo &info, QSerialPortInfo::availablePorts()) {
qDebug() << "Name : " << info.portName();
qDebug() << "Description : " << info.description();
qDebug() << "Manufacturer: " << info.manufacturer();
// Example use QSerialPort
QSerialPort serial;
serial.setPort(info);
serial.setBaudRate(QSerialPort::Baud2400);
serial.setDataBits(QSerialPort::Data8);
serial.setParity(QSerialPort::NoParity);
serial.setFlowControl(QSerialPort::SoftwareControl);
if (serial.open(QIODevice::ReadWrite)) {
    qDebug() << "Opened successfully";
}
serial.write("PSN");
if (!serial.waitForReadyRead(1000)) {
    qDebug() << "failed";
}
qDebug() << serial.readAll();
serial.close();
}
return a.exec();
}
PSN should return the product serial number of the device.
i keep getting "failed" as an output and readAll() gives me "" in the console. Can anybody give me a piece of code I can use to make it work without redirecting me to other posts or examples that came with qtserialport, i wasted hours there.
Reading/Writing to the device in MATLAB and putty works as expected. Its just in QT that i get no response.
Thanks
Hi and welcome to devnet
this code
qDebug() << QByteArray("\0 1 2 3 4 5 6 7");
produces
"".
You must check if the received buffer contains or begins with 0x00. Check with

qDebug() << serial.bytesAvailable();
qDebug() << serial.readAll().size();

if the buffer is not empty.
- desperatenewbie
@mcosta Thanks for the help.
qDebug() << serial.bytesAvailable();
qDebug() << serial.readAll().size();
both output 0 which means the buffer is empty.
From past experience, the buffer remains empty for about 60-90 ms after a command is sent, i believe waiting for 1000ms using the waitForReadyRead(1000) would have given enough time for the buffer to fill.
Is the problem thus with the .write() function?
Sorry,
I don't have much experience with QtSerialPort, but if waitForReadyRead returns true it means something has arrived on the port.
You didn't mention what device you are trying to talk to...
I'm wondering if PSN is the command to get the product serial number but AFTER that command you need to send a CR or LF? So something like:
serial.write("PSN\r");
or
serial.write("PSN");
serial.putChar( 0x0D );  // carriage return (13 decimal)
The reason I bring this up is in putty you may have a CR going out without knowing it.
The other thing you might try is connecting up the serial port readyRead() signal to a slot and see if it ever gets called. This is an indication data is coming in.
I have never tried using QSerialPort outside of the message loop (i.e. 'a.exec()' from main() in your program). I do get the impression that it relies on the message loop to function properly so this test may not work.
If you modify your test program a bit I suspect it will work:
- Create a class derived from QWidget
- Add a 'Test' button with associated slot
- Put all your test code in the test slot.
In Qt4 you had to use the external QSerialDevice class (this, or some variation of it, was migrated into Qt5 but didn't exist internally as part of Qt4). There was something about the open command mode options that I ran into, I don't remember the details unfortunately. There is a namespace called 'AbstractSerial' with various options, at least in the version I have. You might want to use this instead:
if (serial.open(AbstractSerial::ReadWrite)){ qDebug()<<"Opened successfully"; } | https://forum.qt.io/topic/52780/qtserialport-read-write | CC-MAIN-2018-26 | refinedweb | 574 | 50.02 |
Everyone who is anyone seems to be messing about with Flickr these days, one of the newest waves of "Web 2.0" Web-based applications. With its now much imitated style, social networking orientated layout, and use of the newest AJAX technologies to provide the user with a truly interactive experience, Flickr sets the bar for the future of applications on the Internet. Also one of the key benchmarks of a "Web 2.0" application (sorry, that’s the last time I'll use that word, honestly) is the public API that has been delivered and can be used by you and me to provide new and interesting ways of displaying and manipulating photos.
The first thing I will mention is that this article is aimed at .NET 2.0, but the Flickr.Net API library works just as well in .NET 1.1, so don't let that stop you. I'm also assuming that you know how to write applications in .NET, either using Visual Studio 2005 or one of the free Visual Studio 2005 Express editions. I will be focusing mainly on Windows Forms applications, but again the Flickr.Net API library can be used just as well for Web applications, including those hosted in a Medium Trust environment (see my Web site for more details on Medium Trust).
I'm also assuming you are familiar with how to use Flickr and have an account there. If not, go GET ONE!
To get started you will need to get an API Key for use with Flickr. You apply for new keys and manage your keys from the Your Keys section of the Flickr Services Web site at. Applying for a new key should be a pretty instantaneous affair. Once you get your API Key, you can return to the same page and click the Edit Configuration link. This will allow you to edit the description of the use of your API Key and show you your Shared Secret, which you will need if you want to perform authenticated requests to Flickr.
When running the examples below, you will have to replace the example API key with your own.
The Flickr.Net API Library can be downloaded from the Flickr.Net Web site. There is also a forum for posting your Flickr.Net specific questions (if they relate to the Flickr API in general you are better posting them on the Flickr API mailing list).
The download contains the source code for the library, as well as debug and release compiled DLLs. For the purposes of this article, we will use the release DLL, but you are free to add the source code project to your solution directly.
The Flickr API has a number of concepts and naming conventions that it would be useful to go over at this point.
A user is uniquely identified by their User ID. This will look something like "40123132@N01." Where the Username is referred to, this is the same as your Screen name — the name that is displayed on your home page and which you can change at will. This should never be cached by your application because it can be changed at any time.
The final term is your Alias or the part of your Flickr URL that identified you. When you create an account this will be the same as your User ID (this URL will always work); you can specify a more user-friendly format for this URL once only.
When you authenticate a user, Flickr gives you a Token. This token is unique for your API key, the user who you have authenticated, and the permissions they have given you (for example, read only or write access). This token will remain valid as long as the user does not revoke your authentication, and you should ideally cache this within your application.
Once authenticated, calls to Flickr will respond as if sent by that user and will show photos they have permission to see that others may not, such as private photos or their friends' and family's photos that have been marked for their eyes only. You will also be able to modify their photos, and upload new photos into their account.
Without authentication you can still do quite a lot of things, such as view all public photos, search for photos for users or by tag, and browse public groups.
When you perform a search on Flickr it often will not return all the photos in one go, but page the results. For example, if a search would usually return 1000 photos, it will return the details of the first 100 photos, then the next 100, and so on, for 10 pages. You can often specify the size of the page (the number of photos to return each time) and the page number to return; for example, if you set the page size (or PerPage property) to 500 you will only get 2 pages returned. In most cases 500 is the maximum, but check the Flickr API documentation for whatever the given method supports.
First, let's create a simple Flickr object, passing in your API key obtained above.
Visual C#
string apikey = "1234abcd1234abcd";Flickr flickr = new Flickr(apikey);
Visual Basic
Dim apikey As String = "1234abcd1234abcd"Dim f As Flickr = New Flickr(apikey)
From this instance of the Flickr object you can perform searches, browse groups, find users, and perform the steps required to authenticate a user (provided you provide the Shared Secret as well).
The following line of code searches all Flickr photos for photos with the tag "microsoft":
PhotoSearchOptions searchOptions = new PhotoSearchOptions();searchOptions.Tags = "microsoft";Photos microsoftPhotos = flickr.PhotosSearch(searchOptions);
Dim searchOptions As PhotoSearchOptions = New PhotoSearchOptions()searchOptions.Tags = "microsoft"Dim microsoftPhotos As Photos = f.PhotosSearch(searchOptions)
The PhotoSearchOptions class is the easiest and most flexible way to search for photos. Many of its properties are optional, but it covers all the options available to the complex search function, including UserId, Tags, and SortOrder.
The Photos class contains a selection of properties, some of which may not be immediately obvious. As previously mentioned, when you perform a search on Flickr the result are paged. By default a page will be 100 photos big, and the above search will return the first 100 photos (or less if there are fewer than that) for the specified search. Photos.TotalPhotos will give you the total number of photos available on the current search, while Photos.TotalPages will give you the number of pages that you need to return to get all the available photos. By modifying the Page property of the PhotoSearchOptions class, you can return more pages from Flickr.
The following code returns the second and then the third page of results from Flickr for the above search:
searchOptions.Page = 2;Photos microsoftPhotos2 = flickr.PhotosSearch(searchOptions);searchOptions.Page = 3;Photos microsoftPhotos3 = flickr.PhotosSearch(searchOptions);
searchOptions.Page = 2Dim microsoftPhotos2 As Photos = f.PhotosSearch(searchOptions)searchOptions.Page = 3Dim microsoftPhotos3 As Photos = f.PhotosSearch(searchOptions)
The Photos class also contains a PhotoCollection property, which is where the actual photos lie. You can add PhotoCollection objects together to collect together one or more page of results. You can also use the foreach statement to loop through all the photos in the collection.
PhotoCollection allPhotos = microsoftPhotos.PhotoCollection;allPhotos.AddRange(microsoftPhotos2.PhotoCollection);allPhotos.AddRange(microsoftPhotos3.PhotoCollection);foreach (Photo photo in allPhotos){ Console.Write("Photos title is " + photo.Title);}
Dim allPhotos As PhotoCollection = microsoftPhotos.PhotoCollectionallPhotos.AddRange(microsoftPhotos2.PhotoCollection)allPhotos.AddRange(microsoftPhotos3.PhotoCollection)For Each p As Photo In allPhotos Console.Write("Photos title is " & p.Title)Next For
To find a user you either have to search on their screen name or the URL of their home page (or use authentication, which we will get to later). If you have the screen name of a user (say "Sam Judson"), then the following will give you the User ID of that user.
string screenName = "Sam Judson";FoundUser user = flickr.PeopleFindByUsername(screenName);string userId = user.UserId;
Dim screenName As String = "Sam Judson"Dim user As FoundUser = f.PeopleFindByUsername(screenName)Dim userId As String = user.UserId
This user ID can then be used to search that user's photos, favorites, groups, contacts, and so on. (Apologies to my British fans at the use of the American spelling of favorite throughout the API, but blame Flickr not me :-)
// First page of the users photos// Sorted by interestingnessPhotoSearchOptions userSearch = new PhotoSearchOptions ();userSearch.UserId = userId;userSearch.Sort = SortOrder.InterestingnessAsc;Photos usersPhotos = flickr.PhotosSearch(userSearch);// Get users contactsContacts contacts = flickr.ContactsGetPublicList(userId);// Get first page of a users favoritesPhotos usersFavoritePhotos = flickr.FavoritesGetPublicList(userId);// Get a list of the users groupsPublicGroupInfo[] usersGroups = flickr.PeopleGetPublicGroups(userId);
' First page of the users photos' Sorted by interestingnessDim userSearch As PhotoSearchOptions = New PhotoSearchOptions()userSearch.UserId = userIduserSearch.Sort = SortOrder.InterestingnessAscDim usersPhotos As Photos = flickr.PhotosSearch(userSearch)' Get users contactsDim contacts As Contacts = flickr.ContactsGetPublicList(userId)' Get first page of a users favoritesDim usersFavoritePhotos As Photos = flickr.FavoritesGetPublicList(userId)' Get a list of the users groupsDim usersGroups As PublicGroupInfo() = flickr.PeopleGetPublicGroups(userId)
I could sit here all day and list every method in the API, but basically if it's in the Flickr API, it's in the .Net library, and the method name will be the same as the Flickr method — but without the full stops and the flickr bit at the beginning. For example, "flickr.people.getPublicGroups" is "PeopleGetPublicGroups" in the Flickr.Net API Library.
This perhaps is the most complex part of any application (at least in terms of interaction with Flickr), so we will cover it step-by-step. The example application provided at the end will have a complete implementation of desktop authentication in it. Web-based authentication is slightly different and is covered at the end.
Frob? What the heck? I'm not quite sure what the word Frob means, but basically it's a temporary key that you can pass to Flickr, which will then ask the user to authenticate your application. Once it has been authenticated, you can use that same Frob to get the Authentication Token as well as the User ID of the authenticated user (see, told you we'd get back to that).
After you have the Frob, you must pass this, along with your API Key and the permissions you need to Flickr, in a signed URL. Fortunately, the library has a method that does the signing for you and returns the URL to redirect the user to. You will however need to know your Shared Secret (see "Get an API Key" above if you haven't got your shared secret yet) to calculate this URL.
Imagine you have two buttons on a form. The first reads Authenticate Me and the second reads Complete Authentication. The following code illustrates the code behind the form for the two buttons to authenticate a user for read/write permissions.
using FlickrNet;// Store the Frob in a private variableprivate string tempFrob;private string ApiKey = "1234abcd1234abcd1234";private string SharedSecret = "abcd1234abcd";protected void AuthMeButton_Click(object sender, EventArgs e){ // Create Flickr instance Flickr flickr = new Flickr(ApiKey, SharedSecret); // Get Frob tempFrob = flickr.AuthGetFrob(); // Calculate the URL at Flickr to redirect the user to string flickrUrl = flckr.AuthCalcUrl(tempFrob, AuthLevel.Write); // The following line will load the URL in the users default browser. System.Diagnostics.Process.Start(flickrUrl);}protected void CompleteAuthButton_Click(object sender, EventArgs e){ // Create Flickr instance Flickr flickr = new Flickr(ApiKey, SharedSecret); try { // use the temporary Frob to get the authentication Auth auth = flickr.AuthGetToken(tempFrob); // Store this Token for later usage, // or set your Flickr instance to use it. Console.WriteLine("User authenticated successfully"); Console.WriteLine("Authentication token is " + auth.Token); flickr.ApiToken = auth.Token;Console.WriteLine("User id is " + auth.UserId); } catch(FlickrException ex) { // If user did not authenticat your application // then a FlickrException will be thrown. Console.WriteLine("User did not authenticate you"); Console.WriteLine(ex.ToString()); }}
Imports FlickrNet' Store the Frob in a private variablePrivate tempFrob As StringPrivatePrivate.ApiToken = a.TokenConsole.WriteLine("User id is " & a.UserId) Catch ex As FlickrException ' If user did not authenticat your application ' then a FlickrException will be thrown. Console.WriteLine("User did not authenticate you") Console.WriteLine(ex.ToString()) End TryEnd Sub
As you can see, if the user does not authenticate you, then the AuthGetToken method throws an exception which can be caught to handle this situation.
Once you have authenticated and have the user's Token, two things can happen. Firstly, you can now call new methods that are only available when authenticated, such as Flickr.PhotosSetTags(), which sets the tags for a given photo of the authenticated user. You can also now upload photos to that user's account.
Secondly though, some methods will now perform slightly differently. For example, the above search for photos with the "microsoft" tag will now return any of that user's private photos that have that tag, as well as their public and other users' public photos with the tag.
Setting the token can also be done at the constructor stage, or later.
Flickr flickr = new Flickr(ApiKey, SharedSecret, AuthToken); // or flickr.ApiToken = newToken;
Dim f As Flickr = New Flickr(ApiKey, SharedSecret, AuthToken)' or f.ApiToken = newToken
For Web-based authentication you must specify in the Edit Configuration page of your API key the page to redirect users to after they have authenticated your application. It's called the Callback URL. You also do not need to create a Frob when using Web-based authentication.
You can use the Flickr.AuthCalcWebUrl method to generate a URL to redirect the user to.
string url = flickr.AuthCalcWebUrl(AuthLevel.Write);Response.Redirect(url);
Dim url As String = f.AuthCalcWebUrl(AuthLevel.Write)Response.Redirect(url)
In the page specified by the Callback URL above you get passed the Frob in the query string; for example, if your callback URL is, then the user will be redirected to.
Then you can use the Frob to get the authentication token as in the example above:
protected void Page_OnLoad(object sender, EventArgs e){ string frob = Request.QueryString["frob"]; Flickr flickr = new Flickr(ApiKey, SharedSecret); Auth auth = flickr.AuthGetToken(frob); // Store the token somewhere for later calls}
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim frob As String = Request.QueryString("frob") Dim f As Flickr = New Flickr(ApiKey, SharedSecret) Dim a As Auth = f.AuthGetToken(frob) ' Store the token somewhere for later callsEnd Sub
Once you have authenticated a user, you can upload a photo to their account.
This is actually really simple to do. It can either be done directly from a file on the hard drive or from a Stream object.
string file = "test.jpg";string title = "Test Photo";string descripton = "This is the description of the photo";string tags = "tag1,tag2,tag3";string photoId = flickr.UploadPicture(file, title, dscription, tags);
Dim file As String = "test.jpg"Dim title As String = "Test Photo"Dim descripton As String = "This is the description of the photo"Dim tags As String = "tag1,tag2,tag3"Dim photoId As String = f.UploadPicture(file, title, dscription, tags)
The method has parameters for setting the photo's title, description, tags, and whether the photo is public of private (three optional Boolean parameters not shown above).
Once you've uploaded some a photo you might want to add it to a set, add it to a group or two, or update the description, title or add some tags.
The following code updates the title and description of the previously uploaded photo, and then adds it to the first set in the user's list of sets.
flickr.PhotosSetMeta(photoId, "New Title", "New Description");// Get list of users sets Photosets sets = flickr.PhotosetsGetList();// Get the first set in the collectionPhotoset set = sets.PhotosetCollection[0];// Add the photo to that setflickr.PhotosetsAddPhoto(set.PhotosetId, photoId);
f.PhotosSetMeta(photoId, "New Title", "New Description")' Get list of users setsDim sets As Photosets = f.PhotosetsGetList();' Get the first set in the collectionDim set As Photoset = sets.PhotosetCollection(0)' Add the photo to that setf.PhotosetsAddPhoto(set.PhotosetId, photoId)
As you can see, the actual mechanics of communicating with Flickr are relatively simple. The hard part is coming up with the idea for a groovy new application.
In the second part of this application, I will develop an application based on the WIA Coding 4 Fun article Look at Me! Windows Image Acquisition, which you could use to automatically upload images when you connect a compatible device to your computer.
If you would like to receive an email when updates are made to this post, please register here
RSS
Is there a way to authenticate a web application without redirecting the browser to the Flickr site? Can you get the frob from the response via a HttpWebResponse object?
Thanks!
i've got an api key, a shared secret, and i am able to create a new flickr object... but using the downloaded code, exactly as it is, as soon as i get to the flickr.photosearch(...), or flickr.AuthGetFrob(...) etc, i get a timeout error... what could this be?
i'm sure its something really silly.
PingBack from
PingBack from
Could you provide a quick example of pulling back tags using taggetlistuserpopular and displaying on a page?
Thanks,
Chuck
chuck.l.johnson@sprint.com
PingBack from
I'm new with FlickrNet API. How can I retreive NOTES left for a pariticular photo?
(I'm able to retreive comments and tags using PhotosCommentsGetList(photo.PhotoId) and TagsGetListPhoto(photo.PhotoId) respectively)
Is it possible to download pdoto's from flicker to our system/web aps.
@ayyappan: yes you can, there are mass downloaders out there. There may be some stuff on CodePlex ()
PingBack from
PingBack from
PingBack from
Someone emailed me recently saying that they couldn’t find enough examples in .NET for talking to the
I want add Geo tag in your c# source.
and upload by flickr
what do you thinking?
Hi I am trying to upload a photo on Flickr from stream by using UploadPicture, but getting an error "Filetype was not recognised (5)". Where can be a problem?
@progr what is the file's name you're putting in? Have you stepped into it with a debugger to verify you're putting in the entire file name with extention. AKA "myCoolPic", when "myCoolPic.jpg" is expected
photo.OwnerName and photo.CleanTags returns null and photo.DateUploaded returns 1/1/0001 12:00:00 AM.
photo.Title, however, returns fine,
Any ideas?
@coder: Are you sure you have a valid photo return?
The flickr API may have shifted since this article was published, I'd check out for a more up-to-date SDK
Pls tell us how to resolve the time out error ?
thanks in advance..
@AB K what time out issue? do you have the most up to date bits? Have you looked around for a timeout setting?
Great tutorial!!!
I was able to do wonders because of it - hehehe...
how do i authenticate without redirecting to the browser programatically
HI all,
How to get the most popular or latest images of that particular day from FLICKR using FLICKR API
with regards
Hi Coding4Fun,
"@coder: Are you sure you have a valid photo return?"
What do u mean by a valid photo return? Can you please elaborate on this a bit more?
I am also getting the photo.DateUploaded & DateTaken values as 1/1/0001 12:00:00 AM. I am getting the other details correctly i.e. the title and source of the image.
One more thing is I am getting this error when I look for pictures in photostream using flickr.PhotosGetNotInSet() & pictures inside an album using flickr.PhotosetsGetPhotos(). When I search for albums uisng flickr.PhotosetsGetList(), I get the album dates correctly.
Thanks in advance.
Samar
@jaikit you can't. It is a security measure so the person that uses your application proves they give you the developer permission to access their photos.
Guys, At Coding4Fun, we try to show off some cool stuff you can do with the APIs. This article is almost 2.5 years old and FlickrNet API chances are has been tweaked and new features have been added. Check it out on codeplex ().
If you have an issue with the Flickr object, flickrnet are the guys to ask as they wrote it. for issues. for discussions. | http://blogs.msdn.com/coding4fun/archive/2006/11/22/1126978.aspx | crawl-002 | refinedweb | 3,391 | 57.06 |
prctl - operations on a process
#include <sys/prctl.h>
int
prctl(int option, unsigned long
arg2, unsigned long arg3,
unsigned long arg4, unsigned long arg5);. within its user namespace, this bit).)
Normally, this flag is set to 1. However, it is reset to the current value contained in the file /proc/sys/fs/suid_dumpable (which by default has the value 0), in the following circumstances:)
Get if.
The calling process must have the CAP_SYS_RESOURCE capability. The value in arg2 is one of the options below, while arg3 provides a new value for the option. The arg4 and arg5 arguments must be zero if unused.
Since Linux
3.10, this feature is available all the time.).
The second limitation is that such transitions can be done only once in a process life time. Any further attempts will be rejected. This should help system administrators monitor unusual symbolic-link transitions over all processes running on a system..).
PR_SET_NO_NEW_PRIVS (since Linux 3.5)
Set the calling thread mode bits, and file capabilities non-functional). Once set, this bit cannot be unset. The setting of this bit is inherited by children created by fork(2) and clone(2), and preserved across execve(2).
Since Linux 4.10, the value of a thread’s no_new_privs bit. This operation is available only if the kernel is configured with CONFIG_SECCOMP enabled.
With arg2 set to SECCOMP_MODE_FILTER (since Linux 3.5), the system calls allowed are defined by a pointer to a Berkeley Packet Filter passed in arg3. This argument is a pointer to struct sock_fprog; it can be designed to filter arbitrary system calls and system call arguments. This mode is available only if the kernel is configured with CONFIG_SECCOMP_FILTER enabled.
If SECCOMP_MODE_FILTER filters permit fork(2), then the seccomp mode is inherited by children created by fork(2); if execve(2) is permitted, then the seccomp mode is preserved across execve(2). If the filters permit prctl() calls, then additional filters can be added; they are run in order until the first non-allow result is seen.
For further information, see the kernel source file Documentation/userspace-api/seccomp_filter.rst (or Documentation/prctl/seccomp_filter.txt before Linux 4.13).
PR_GET_SECCOMP (since Linux 2.6.23)
Return (as the function result); (via the function result) the current setting of the "THP disable" flag for the calling thread: either 1, if the flag is set, or 0, if it is not... If the nanosecond value supplied in arg2 is greater than zero, then the "current" value is set to this value. If arg2 is less than or equal to zero, the "current" timer slack is reset to the thread’s "default" timer slack value.
The "current".
The timer expirations affected by timer slack are those set by select(2), pselect(2), poll(2), ppoll(2), epoll_wait(2), epoll_pwait(2), clock_nanosleep(2), nanosleep(2), and futex(2) (and thus the library functions implemented via futexes, including pthread_cond_timedwait(3), pthread_mutex_timedlock(3), pthread_rwlock_timedrdlock(3), pthread_rwlock_timedwrlock(3), and sem_timedwait(3)).
Timer slack is not applied to threads that are scheduled under a real-time scheduling policy (see sched_setscheduler(2)).
When a new thread is created, the two timer slack values are made the same as the "current" value of the creating thread. Thereafter, a thread can adjust its "current" timer slack value via PR_SET_TIMERSLACK. The "default" value can’t be changed. The timer slack values of init (PID 1), the ancestor of all processes, are 50,000 nanoseconds (50 microseconds). The timer slack values are preserved across execve(2).
Since Linux 4.6, the "current" timer slack value of any process can be examined and changed via the file /proc/[pid]/timerslack_ns. See proc(5).
PR_GET_TIMERSLACK (since Linux 2.6.28)
Return (as the function result);, and (if it returns) PR_GET_SECCOMP return the nonnegative values described above. All other option values return 0 on success. On error, -1 is returned, and errno is set appropriately.
*
option is PR_SET_PTRACER and arg2 is not 0, PR_SET_PTRACER_ANY, or the PID of an existing process.
EOPNOTSUPP
option is PR_SET_FP_MODE and arg2 has an invalid or unsupported value.
The prctl() system call was introduced in Linux 2.1.57., and so on.
signal(2), core(5)
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://www.zanteres.com/manpages/prctl.2.html | CC-MAIN-2022-33 | refinedweb | 733 | 58.69 |
Hi I need a bit more help :-( I have Minimal knowledge of Python so would need you to expand
In essence I know the directory (or directories) I want to clear of rubbish, I know the extention or extentions which are no longer of use I simply need to know programatically (Python) how to achieve that!
I can get as far as listing all the files in the directory with that extention, but how do I then expand this to delete files over say 14 days old?
essentially this is all I have got as far as:
import os
import sys
import datetime
import time
import win32api
# dicGeneralInfo = {}
dicGeneralInfo["Current Time"]=time.time()
dicGeneralInfo["Directory"]="C:\blah\blah"
listdir = os.listdir(dicGeneralInfo["Directory"])
for strFile in listdir:
strFullFileName = dicGeneralInfo["Directory"] + os.sep + strFile if os.path.isdir(strFullFileName): continue if strFullFileName[-3:].upper() == 'TXT':
now I need to work out how to remove the file(s) if over a certain age
print strFullFileName
There are probably 1001 ways to do this simple task !
Cheers
All
txt files in blah.
import glob for files in glob.glob(r'C:\blah\blah\*.txt'): print files
Run all files through os.stat
What you need is
st_mtime,nr 8.
Now you can use time(
import time),make a time object for
14_days back.
Make a time object from
os.stat(files)[8]call it
lastmod_date.
Compare
14_days > lastmod_date use
os.remove(files) on files it give back.
Hi I am still struggling with this can you give a worked example from your post above I don't understand :
How to / where to use this : st_mtime,nr 8
How to : Make a time object from os.stat(files)[8]call it lastmod_date.
Compare 14_days > lastmod_date use os.remove(files) on files it give back
Thanks
Test print will show you files to be removed.
If ok remove comment(#) from
#os.remove(files)
Remember never use
C:\
Use raw string
r'C:\' or
C:\\ or
C:/
import os, glob, time, sys def file_remove(folder, exp_date, ext): '''Remove files older than than set exp_date''' for files in glob.glob('{}\*.{}'.format(folder, ext)): lastmod_date = time.localtime(os.stat(files)[8]) ex_date = time.strptime(exp_date, '%d-%m-%Y') if ex_date > lastmod_date: try: #Test print print 'Removing {} {}'.format(files, time.strftime("<Older than %d-%m-%Y>",ex_date)) #os.remove(files) except OSError: print 'Could not remove {}'.format(files) if __name__ == '__main__': folder = r'C:\test' #Expiration date format DD-MM-YYYY exp_date = '07-02-2013' #File extension to search for ext = 'txt' file_remove(folder, exp_date, ext)
Here's a quick example (using what snippsat posted):
import glob import time import os def dl(): t = 1361455258.34 #a generic moment of time, access time of a file from the folder Blah for files in glob.glob(r'E:\blah\*.txt'): print(files) #prints the content of the folder if (os.stat(files).st_atime>t): os.remove(files) #removes desired files print('\n\n\n') for files in glob.glob(r'E:\blah\*.txt'): print(files) #prints the remaining files from the folder dl()
[Edit]: Also, look above :). 59 seconds earlier than me, but cleaner/better example.
There is also a code snippet here at DaniWeb: | https://www.daniweb.com/software-development/python/threads/448256/noobie-to-python-question-windows-files | CC-MAIN-2015-11 | refinedweb | 535 | 58.79 |
Revision history for Perl module Net::FreshBooks::API {{$NEXT}} - Change the no_index rule in order to get PAUSE to remove OAuthDemo from 02packages - Disable all live tests by default 0.23 2011-10-20 - Silences warning which emitted a lot of XML outside of verbose mode - Adds docs on how to paginate using an iterator 0.22 2011-09-14 - Adds support for client contacts 0.21 2011-04-05 - AutoBill details may now be deleted from Recurring Items 0.20 2011-03-30 - Adds support for Language and Gateway - Adds AutoBill to Recurring Items 0.19 2011-01-17 - Removes warn statement left over from debugging - Adds namespace::autoclean and B::Hooks::EndOfScope to prereqs to get around "Undefined subroutine &namespace;::autoclean::on_scope_end" - Adds "parent" to prereqs to fix another CPANTS error report - Adds account_name to required fields for demo script - Adds no_index to META for OAuthDemo, which should not be appearing in search.cpan results - Adds documentation, specifically for Moose roles 0.18 2010-11-17 - API::oauth() method now returns a new object with each call - Added API::account_name_ok() for name validation - Added Net::FreshBooks::API::Error for exception handling 0.17 2010-11-12 - Fixed bad test plan 0.16 2010-11-11 - Updated documention for Estimate.pm - folder is now marked as read-only for Estimate, Invoice and Client 0.15 2010-11-10 - Estimate objects are now supported - Moved some methods into Moose roles 0.14 2010-10-07 - Fixed missing package versions which broke distro 0.13 0.13 2010-10-07 - Fixed a bug where OAuth URL would not always use correct account_name when being constructed - Added many new API params - Expanded documentation and shifted much off of API.pm and to the subclasses 0.12 2010-09-07 - Fixed some documentation errors - Documented OAuth-specific methods - Overriding more of Net::OAuth::Simple in order to get examples working 0.11 2010-09-07 - Move distro management to Dist::Zilla - Added OAuth support - Moved a lot of the 
documentation from API.pm to sub-modules 0.10 2009-11-23 - Iterator's next() method now returns cloned objects - Added documentation for Invoice links() and list() functionality 0.09 2009-11-19 - Added amount field to recurring items - Added documentation for the lines() method 0.08 2009-11-18 - Added explicit documentation for most available methods - Calls to list() no longer perform lookups while iteration takes place. This is no longer necessary after updates to the FreshBooks API in August 2009. This means that one call to list returns all of the data for the requested page, so multiple API calls are no longer necessary. Multiple API calls will still, of course, be necessary if fetching multiple pages. 0.07 2009-11-05 - Added return_uri field to Invoice and Recurring - perltidied source 0.06 2009-08-12 - Added XML::Simple and Test::Exception to dependencies - Improved handling of error messages. In many cases the returned message had been blank - Added test to confirm error messages are being correctly parsed - "verbose" setting is now referred to in the documentation (if only briefly) 0.05 2009-07-16 - Added Crypt::SSLeay to dependencies to fix the following error: "501 Protocol scheme 'https' is not supported (Crypt::SSLeay or IO::Socket::SSL not installed)" - Added Path::Class to dependencies 0.04 2009-07-15 - Fixed failing test t/007_live_test.t "Can't call method "childNodes" on an undefined value at /tmp/net-freshbooks-api/lib/Net/FreshBooks/API/Base.pm line 174" 0.03 2009-07-13 - Fixed file names in MANIFEST 0.02 2009-07-10 - Added Net::FreshBooks::API::Recurring - Created a new FreshBooks test account because the original account was failing the ping method - Added tests for recurring items - Added a sample script: examples/create_recurring_item.pl - Added some POD tests - All modules now pass Perl::Critic severity 4 | https://metacpan.org/changes/distribution/Net-FreshBooks-API | CC-MAIN-2019-43 | refinedweb | 645 | 55.64 |
Java Dates + Database Weird807606 Apr 27, 2007 10:30 AM
How come my Java dates from a web form are sometimes being stored in the MSSQL database in two different formats?
4/26/2007
and 4/26/2007 12:00:00 PM
4/26/2007
and 4/26/2007 12:00:00 PM
This content has been marked as final. Show 9 replies
1. Re: Java Dates + Database Weird807606 Apr 27, 2007 11:05 AM (in response to 807606)Are the column types DATE and DATETIME, respectively?
2. Re: Java Dates + Database Weird807606 Apr 27, 2007 11:54 AM (in response to 807606)The column is datetime, its one column and I get both the styles of date in it.
3. are u formatting807606 Apr 27, 2007 1:40 PM (in response to 807606)are u formatting the date in java or just creates the date object?
I suppose you use sql date object, right?
4. Re: are u formatting807606 Apr 27, 2007 1:45 PM (in response to 807606)Here's my few code snippets.
import java.sql.Date; /* */ Date cdate = new Date(new java.util.Date().getTime()); Connection con = null; /* */ con = ConnectionFactoryWrapper.getConnection("sqlServer"); ps = con.prepareStatement(insert); ps.setInt(1, id); ps.setInt(2, rating); ps.setString(3, comments); ps.setTime(4, cdate.getTime()); ps.setString(5, completed); ps.execute(); /* */
5. Re: are u formatting807606 Apr 27, 2007 1:47 PM (in response to 807606)Yup. Those are the next questions:
Show me the code:
1. How do you insert dates?
2. How do you retrieve dates?
3. How do you format dates for display?
6. Re: are u formatting807606 Apr 27, 2007 1:49 PM (in response to 807606)
ps.setTime(4, cdate.getTime());??? You've never stated what you want in that column -- the date, the date&time, or just the time?
7. Re: are u formatting807606 Apr 27, 2007 1:53 PM (in response to 807606)Date and time is preferred. Right now it randomly does both so when I run a query on the data it doesn't make it easy to grab it all.
8. Re: are u formatting807606 Apr 27, 2007 2:03 PM (in response to 807606)
Date and time is preferred.In that case, I would use java.sql.Timestamp and PreparedStatement method setTimestamp. A java.sql.Date is supposed to be normalized to have hours, minutes, seconds and millis all set to 0.
9. Re: this could work807606 Apr 27, 2007 2:03 PM (in response to 807606)The problem cold be this
remember, getTome() returns the date in miliseconds, my solutions is:
ps.setTime(4, cdate.getTime());
you insert statement cold look like "insert into table vales (CAST('?' AS datetime))"
and add this to your code
GregorianCalendar gc = new GregorianCalendar(); gc.setTime(cdate.getTime()); int year = gc.get( GregorianCalendar.YEAR ); int month = gc.get( GregorianCalendar.MONTH )+1; int day = gc.get( GregorianCalendar.DAY_OF_MONTH ); String cDate = month+"/"+day+"/"year; //instead of //ps.setTime(4, cdate.getTime()); //use ps.setString(4, cDate); | https://community.oracle.com/message/8963977 | CC-MAIN-2015-27 | refinedweb | 503 | 68.06 |
Introduction
The git plugin provides fundamental git operations for Jenkins projects. It can poll, fetch, checkout, branch, list, merge, tag, and push repositories.
- Introduction
- Changelog in GitHub Releases
- Pipelines
- Configuration
- Git Credential Binding
- Extensions
- Environment Variables
- Properties
- Git Publisher
- Combining repositories
- Bug Reports
- Contributing to the Plugin
- Remove Git Plugin BuildsByBranch BuildData Script
Changelog in GitHub Releases
Release notes are recorded in GitHub Releases since July 1, 2019 (git plugin 3.10.1 and later). Prior release notes are recorded in the git plugin repository change log.
Pipelines
The git plugin provides an SCM implementation to be used with the Pipeline SCM
checkout step. The Pipeline Syntax Snippet Generator guides the user to select checkout options.
The 90 second video clip below introduces the Pipeline Syntax Snippet Generator and shows how it is used to generate steps for the Jenkins Pipeline.
Multibranch Pipelines
The git plugin includes a multibranch provider for Jenkins Multibranch Pipelines and for Jenkins Organization Folders. The git plugin multibranch provider is a "base implementation" that uses command line git. Users should prefer the multibranch implementation for their git provider when one is available. Multibranch implementations for specific git providers can use REST API calls to improve the Jenkins experience and add additional capabilities. Multibranch implementations are available for GitHub, Bitbucket, GitLab, Gitea, and Tuleap.
The 30 minute video clip below introduces Multibranch Pipelines.
Git Credentials Binding
The git plugin provides
Git Username and Password binding that allows authenticated git operations over HTTP and HTTPS protocols using command line git in a Pipeline job.
The git credential bindings are accessible through the
withCredentials step of the Credentials Binding plugin. The binding retrieves credentials from the Credentials plugin.
Git Username and Password Binding
This binding provides authentication support over HTTP protocol using command line git in a Pipeline job.
- Procedure
Click the Pipeline Syntax Snippet Generator and choose the
withCredentials step, and add the Git Username and Password binding.
Choose the required credentials and Git tool name, specific to the generated Pipeline snippet.
Two variable bindings are used,
GIT_USERNAME and
GIT_PASSWORD, to pass the username and password to
sh,
bat, and
powershell steps inside the
withCredentials block of a Pipeline job. The variable bindings are available even if the
JGit or
JGit with Apache HTTP Client git implementation is being used.
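As an illustration, a scripted Pipeline sketch of the binding (the credentials identifier and the git tool name are placeholders and must match what is configured on your Jenkins controller):

```groovy
// Sketch only: 'my-http-credentials' is a hypothetical credentials ID and
// 'Default' names a git tool configured on the controller.
withCredentials([gitUsernamePassword(credentialsId: 'my-http-credentials',
                                     gitToolName: 'Default')]) {
    // Command line git picks up GIT_USERNAME / GIT_PASSWORD from the binding
    sh 'git fetch --all'
}
```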
Configuration
Repositories
The git plugin fetches commits from one or more remote repositories and performs a checkout in the agent workspace. Repositories and their related information include:
- Repository URL
The URL of the remote repository. The git plugin passes the remote repository URL to the git implementation (command line or JGit). Valid repository URL’s include
https,
ssh,
scp,
git,
local file, and other forms. Valid repository URL forms are described in the git documentation.
- Credentials
Credentials are defined using the Jenkins credentials plugin. They are selected from a drop-down list and their identifier is stored in the job definition. Refer to using credentials for more details on supported credential types.
- Name
Git uses a short name to simplify user references to the URL of the remote repository. The default short name is
origin. Other values may be assigned and then used throughout the job definition to refer to the remote repository.
- Refspec
A refspec maps remote branches to local references. It defines the branches and tags which will be fetched from the remote repository into the agent workspace.
A refspec defines the remote references that will be retrieved and how they map to local references. If left blank, it will default to the normal
git fetchbehavior and will retrieve all branches. This default behavior is sufficient for most cases.
The default refspec is
+refs/heads/*:refs/remotes/REPOSITORYNAME/* where REPOSITORYNAME is the value you specify in the above repository "Name" field. The default refspec retrieves all branches. If a checkout only needs one branch, then a more restrictive refspec can reduce the data transfer from the remote repository to the agent workspace. For example,
+refs/heads/master:refs/remotes/origin/master will retrieve only the master branch and nothing else.
The refspec can be used with the honor refspec on initial clone option in the advanced clone behaviors to limit the number of remote branches mapped to local references. If "honor refspec on initial clone" is not enabled, then the plugin uses a default refspec for its initial fetch. This maintains compatibility with previous behavior and allows the job definition to decide if the refspec should be honored on initial clone.
Multiple refspecs can be entered by separating them with a space character. The refspec value
+refs/heads/master:refs/remotes/origin/master +refs/heads/develop:refs/remotes/origin/develop retrieves the master branch and the develop branch and nothing else.
Refer to the git refspec documentation for more refspec details.
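A Pipeline checkout sketch that maps only the master branch (the repository URL is a placeholder):

```groovy
// Sketch only: the repository URL is a placeholder.
checkout([$class: 'GitSCM',
    branches: [[name: 'refs/heads/master']],
    userRemoteConfigs: [[
        url: 'https://example.com/project.git',
        name: 'origin',
        // Map only the master branch into the workspace
        refspec: '+refs/heads/master:refs/remotes/origin/master'
    ]]])
```

Note that the refspec only limits the initial clone when the honor refspec on initial clone option (described under clone extensions) is also enabled.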
Using Credentials
The git plugin supports username / password credentials and private key credentials provided by the Jenkins credentials plugin. It does not support other credential types like secret text, secret file, or certificates. Select credentials from the job definition drop down menu or enter their identifiers in Pipeline job definitions.
When the remote repository is accessed with the HTTP or HTTPS protocols, the plugin requires a username / password credential. Other credential types will not work with HTTP or HTTPS protocols.
When the remote repository is accessed with the ssh protocol, the plugin requires an ssh private key credential. Other credential types will not work with the ssh protocol.
Push Notification From Repository
To minimize the delay between a push and a build, configure the remote repository to use a Webhook to notify Jenkins of changes to the repository. Refer to webhook documentation for your repository:
Other git repositories can use a post-receive hook in the remote repository to notify Jenkins of changes. Add the following line in your
hooks/post-receive file on the git server, replacing <URL of the Git repository> with the fully qualified URL you use when cloning the repository.
curl http://yourserver/jenkins/git/notifyCommit?url=<URL of the Git repository>
This will scan all the jobs that:
Have Build Triggers > Poll SCM enabled. No polling schedule is required.
Are configured to build the repository at the specified URL
For jobs that meet these conditions, polling will be triggered. If polling finds a change worthy of a build, a build will be triggered.
This allows a notify script to remain the same for all Jenkins jobs. Jenkins polls to verify that there is a change before it actually starts a build.
When notifyCommit is successful, the list of triggered projects is returned.
Enabling JGit
See the git client plugin documentation for instructions to enable JGit. JGit becomes available throughout Jenkins once it has been enabled.
Global Configuration
In the
Configure System page, the Git Plugin provides the following options:
- Global Config user.name Value
Defines the default git user name that will be assigned when git commits a change from Jenkins. For example,
Janice Examplesperson. This can be overridden by individual projects with the Custom user name/e-mail address extension.
- Global Config user.email Value
Defines the default git user e-mail that will be assigned when git commits a change from Jenkins. For example,
janice.examplesperson@example.com. This can be overridden by individual projects with the Custom user name/e-mail address extension.
- Show the entire commit summary in changes
The
changes page for each job would truncate the change summary prior to git plugin 4.0. With the release of git plugin 4.0, the default was changed to show the complete change summary. Administrators that want to restore the old behavior may disable this setting.
- Hide credential usage in job output
If checked, the console log will not show the credential identifier used to clone a repository.
- Disable performance enhancements
If JGit and command line git are both enabled on an agent, the git plugin uses a "git tool chooser" to choose a preferred git implementation. The preferred git implementation depends on the size of the repository and the git plugin features requested by the job. If the repository size is less than the JGit repository size threshold and the git features of the job are all implemented in JGit, then JGit is used. If the repository size is greater than the JGit repository size threshold or the job requires git features that are not implemented in JGit, then command line git is used.
If checked, the plugin will disable the feature that recommends a git implementation on the basis of the size of a repository. This switch may be used in case of a bug in the performance improvement feature. If you enable this setting, please report a git plugin issue that describes why you needed to enable it.
- Preserve second fetch during initial checkout
If checked, the initial checkout step will not avoid the second fetch. Git plugin versions prior to git plugin 4.4 would perform two fetch operations during the initial repository checkout. Git plugin 4.4 removes the second fetch operation in most cases. Enabling this option will restore the second fetch operation. This setting is only needed if there is a bug in the redundant fetch removal logic. If you enable this setting, please report a git plugin issue that describes why you needed to enable it.
- Add git tag action to jobs
If checked, the git tag action will be added to any builds that happen after the box is checked. Prior to git plugin 4.5.0, the git tag action was always added. Git plugin 4.5.0 and later will not add the git tag action to new builds unless the administrator enables it.
The git tag action allows a user to apply a tag to the git repository in the workspace based on the git commit used in the build applying the tag. The git plugin does not push the applied tag to any other location. If the workspace is removed, the tag that was applied is lost. Tagging a workspace made sense when using centralized repositories that automatically applied the tag to the centralized repository. Applying a git tag in an agent workspace doesn’t have many practical uses.
Repository Browser
A Repository Browser adds links in "changes" views within Jenkins to an external system for browsing the details of those changes. The "Auto" selection attempts to infer the repository browser from the "Repository URL" and can detect cloud versions of GitHub, Bitbucket and GitLab.
Repository browsers include:
AssemblaWeb
- Assembla Git URL
Root URL serving this Assembla repository. For example,
FishEye
Repository browser for git repositories hosted by Atlassian Fisheye. Options include:
- URL
Root URL serving this FishEye repository. For example,
Kiln
- URL
Root URL serving this Kiln repository. For example,
Microsoft Team Foundation Server/Visual Studio Team Services
Repository browser for git repositories hosted by Azure DevOps. Options include:
- URL or name
Root URL serving this Azure DevOps repository. For example,
bitbucketweb
- URL
Root URL serving this Bitbucket repository. For example,
bitbucketserver
Repository browser for git repositories hosted by an on-premises Bitbucket Server installation. Options include:
- URL
Root URL serving this Bitbucket repository. For example,
cgit
- URL
Root URL serving this cgit repository. For example,
gitblit
- GitBlit root url
Root URL serving this GitBlit repository. For example,
- Project name in GitBlit
Name of the GitBlit project. For example,
my-project
githubweb
- URL
Root URL serving this GitHub repository. For example,
gitiles
- gitiles root url
Root URL serving this Gitiles repository. For example,
gitlab
- URL
Root URL serving this GitLab repository. For example,
- Version
Major and minor version of GitLab you use, such as 12.6. If you don’t specify a version, a modern version of GitLab (>= 8.0) is assumed. For example,
12.6
gitlist
- URL
Root URL serving this GitList repository. For example,
gitoriousweb
Gitorious was acquired in 2015. This browser is deprecated.
- URL
Root URL serving this Gitorious repository. For example,
gitweb
- URL
Root URL serving this GitWeb repository. For example,
gogs
- URL
Root URL serving this Gogs repository. For example,
phabricator
Repository browser for git repositories hosted by Phacility Phabricator. Options include:
- URL
Root URL serving this Phabricator repository. For example,
- Repository name in Phab
Name of the Phabricator repository. For example,
my-project
redmineweb
- URL
Root URL serving this Redmine repository. For example,
rhodecode
- URL
Root URL serving this RhodeCode repository. For example,
stash
Stash is now called BitBucket Server. Repository browser for git repositories hosted by BitBucket Server. Options include:
- URL
Root URL serving this Stash repository. For example,
Git Credential Binding
The git plugin provides one binding to support authenticated git operations over HTTP or HTTPS protocol, namely
Git Username and Password. The git plugin depends on the Credentials Binding plugin to support these bindings.
To access the
Git Username and Password binding in a Pipeline job, see the Git Credentials Binding section above.
Freestyle projects can use git credential binding with the following steps:
Check the box Use secret text(s) or file(s), add Git Username and Password binding.
Choose the required credentials and Git tool name.
Two variable bindings are used,
GIT_USERNAME and
GIT_PASSWORD, to pass the username and password to shell, batch, and powershell steps in a Freestyle job. The variable bindings are available even if the
JGit or
JGit with Apache HTTP Client git implementation is being used.
Extensions
Extensions add new behavior or modify existing plugin behavior for different uses. Extensions help users more precisely tune the plugin to meet their needs.
Extensions include:
Clone Extensions
Clone extensions modify the git operations that retrieve remote changes into the agent workspace. The extensions can adjust the amount of history retrieved, how long the retrieval is allowed to run, and other retrieval details.
Advanced clone behaviours
Advanced clone behaviours modify the git clone and git fetch commands. They control:

- breadth of history retrieval (refspecs)
- depth of history retrieval (shallow clone)
- disc space use (reference repositories)
- duration of the command (timeout)
- tag retrieval
Advanced clone behaviors include:
- Honor refspec on initial clone
Perform initial clone using the refspec defined for the repository. This can save time, data transfer and disk space when you only need to access the references specified by the refspec. If this is not enabled, then the plugin default refspec includes all remote branches.
- Shallow clone
Perform a shallow clone by requesting a limited number of commits from the tip of the requested branch(es). Git will not download the complete history of the project. This can save time and disk space when you just want to access the latest version of a repository.
- Shallow clone depth
Set shallow clone depth to the specified number of commits. Git will only download
depth commits from the remote repository, saving time and disk space.
- Path of the reference repo to use during clone
Specify a folder containing a repository that will be used by git as a reference during clone operations. This option will be ignored if the folder is not available on the agent.
- Timeout (in minutes) for clone and fetch operations
Specify a timeout (in minutes) for clone and fetch operations.
- Fetch tags
Deselect this to perform a clone without tags, saving time and disk space when you want to access only what is specified by the refspec, without considering any repository tags.
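In a Pipeline job, these clone behaviours map to the CloneOption checkout extension. A minimal sketch (the repository URL and the reference repository path are placeholders):

```groovy
// Sketch only: URL and reference path are placeholders.
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'CloneOption',
        shallow: true,                         // shallow clone
        depth: 1,                              // shallow clone depth
        noTags: true,                          // do not fetch tags
        reference: '/var/cache/git/repo.git',  // reference repo on the agent
        honorRefspec: true,                    // honor refspec on initial clone
        timeout: 20]],                         // minutes
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```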
Prune stale remote tracking branches
Removes remote tracking branches from the local workspace if they no longer exist on the remote. See
git remote prune and
git fetch --prune for more details.
Prune stale tags
Removes tags from the local workspace before fetch if they no longer exist on the remote. If stale tags are not pruned, deletion of a remote tag will not remove the local tag in the workspace. If the local tag already exists in the workspace, git correctly refuses to create the tag again. Pruning stale tags allows the local workspace to create a tag with the same name as a tag which was removed from the remote.
Checkout Extensions
Checkout extensions modify the git operations that place files in the workspace from the git repository on the agent. The extensions can adjust the maximum duration of the checkout operation, the use and behavior of git submodules, the location of the workspace on the disc, and more.
Advanced checkout behaviors
Advanced checkout behaviors modify the
git checkout command. Advanced checkout behaviors include
- Timeout (in minutes) for checkout operation
Specify a timeout (in minutes) for checkout. The checkout is stopped if the timeout is exceeded. Checkout timeout is usually only required with slow file systems or large repositories.
Advanced sub-modules behaviours
Advanced sub-modules behaviors modify the
git submodule commands. They control:
- depth of history retrieval (shallow clone)
- disc space use (reference repositories)
- credential use
- duration of the command (timeout)
- concurrent threads used to fetch submodules
Advanced sub-modules include:
- Disable submodules processing
Ignore submodules in the repository.
- Recursively update submodules
Retrieve all submodules recursively. Without this option, submodules which contain other submodules will ignore the contained submodules.
- Update tracking submodules to tip of branch
Retrieve the tip of the configured branch in .gitmodules.
- Use credentials from default remote of parent repository
Use credentials from the default remote of the parent project. Submodule updates do not use credentials by default. Enabling this extension will provide the parent repository credentials to each of the submodule repositories. Submodule credentials require that the submodule repository must accept the same credentials as the parent project. If the parent project is cloned with https, then the authenticated submodule references must use https as well. If the parent project is cloned with ssh, then the authenticated submodule references must use ssh as well.
- Path of the reference repo to use during submodule update
Folder containing a repository that will be used by git as a reference during submodule clone operations. This option will be ignored if the folder is not available on the agent running the build. A reference repository may contain multiple subprojects. See the combining repositories section for more details.
- Timeout (in minutes) for submodule operations
Specify a timeout (in minutes) for submodules operations. This option overrides the default timeout.
- Number of threads to use when updating submodules
Number of parallel processes to be used when updating submodules. Default is to use a single thread for submodule updates
- Shallow clone
Perform shallow clone of submodules. Git will not download the complete history of the project, saving time and disk space.
- Shallow clone depth
Set shallow clone depth for submodules. Git will only download recent history of the project, saving time and disk space.
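In a Pipeline job, these behaviours map to the SubmoduleOption checkout extension. A sketch with placeholder values:

```groovy
// Sketch only: the repository URL is a placeholder.
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'SubmoduleOption',
        disableSubmodules: false,    // process submodules
        recursiveSubmodules: true,   // update submodules of submodules
        trackingSubmodules: false,   // do not track tip of .gitmodules branch
        parentCredentials: true,     // reuse parent repository credentials
        reference: '',               // optional reference repository path
        timeout: 20,                 // minutes
        threads: 4,                  // parallel submodule updates
        shallow: true,
        depth: 1]],
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```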
Check out to a sub-directory
Checkout to a subdirectory of the workspace instead of using the workspace root.
This extension should not be used in Jenkins Pipeline (either declarative or scripted). Jenkins Pipeline already provides standard techniques for checkout to a subdirectory. Use
ws and
dir in Jenkins Pipeline rather than this extension.
- Local subdirectory for repo
Name of the local directory (relative to the workspace root) for the git repository checkout. If left empty, the workspace root itself will be used.
Check out to specific local branch
- Branch name
If given, checkout the revision to build as HEAD on the named branch. If value is an empty string or "**", then the branch name is computed from the remote branch without the origin. In that case, a remote branch 'origin/master' will be checked out to a local branch named 'master', and a remote branch 'origin/develop/new-feature' will be checked out to a local branch named 'develop/new-feature'. If a specific revision and not branch HEAD is checked out, then 'detached' will be used as the local branch name.
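A Pipeline sketch combining a subdirectory checkout (via dir, as recommended above) with a named local branch (the repository URL and directory name are placeholders):

```groovy
// Sketch only: 'checkout-dir' and the URL are placeholders.
node {
    dir('checkout-dir') {
        checkout([$class: 'GitSCM',
            branches: [[name: '*/master']],
            // '**' computes the local branch name from the remote branch
            extensions: [[$class: 'LocalBranch', localBranch: '**']],
            userRemoteConfigs: [[url: 'https://example.com/project.git']]])
    }
}
```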
Wipe out repository and force clone
Delete the contents of the workspace before build and before checkout. Deletes the git repository inside the workspace and will force a full clone.
Clean after checkout
Clean the workspace after every checkout by deleting all untracked files and directories, including those which are specified in
.gitignore. Resets all tracked files to their versioned state. Ensures that the workspace is in the same state as if cloned.
Clean before checkout
Clean the workspace before every checkout by deleting all untracked files and directories, including those which are specified in .gitignore. Resets all tracked files to their versioned state. Ensures that the workspace is in the same state as if cloned.
Sparse checkout paths
Specify the paths that you’d like to sparse checkout. This may be used for saving space (Think about a reference repository). Be sure to use a recent version of Git, at least above 1.7.10.
Multiple sparse checkout path values can be added to a single job.
- Path
File or directory to be included in the checkout
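In Pipeline, sparse checkout paths can be declared with the SparseCheckoutPaths extension. A sketch with hypothetical paths:

```groovy
// Sketch only: paths and URL are placeholders.
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'SparseCheckoutPaths',
        sparseCheckoutPaths: [[path: 'docs/'],
                              [path: 'src/main/']]]],
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```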
Git LFS pull after checkout
Enable git large file support for the workspace by pulling large files after the checkout completes. Requires that the controller and each agent performing an LFS checkout have installed
git lfs.
Changelog Extensions
The plugin can calculate the source code differences between two builds. Changelog extensions adapt the changelog calculations for different cases.
Calculate changelog against a specific branch
'Calculate changelog against a specific branch' uses the specified branch to compute the changelog instead of computing it based on the previous build. This extension can be useful for computing changes related to a known base branch, especially in environments which do not have the concept of a "pull request".
- Name of repository
Name of the repository, such as 'origin', that contains the branch.
- Name of branch
Name of the branch used for the changelog calculation within the named repository.
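In Pipeline, this maps to the ChangelogToBranch extension. A sketch with placeholder values:

```groovy
// Sketch only: branch names and URL are placeholders.
checkout([$class: 'GitSCM',
    branches: [[name: '*/feature-branch']],
    extensions: [[$class: 'ChangelogToBranch',
        // Compute the changelog against origin/master instead of the previous build
        options: [compareRemote: 'origin', compareTarget: 'master']]],
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```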
Tagging Extensions
Tagging extensions allow the plugin to apply tags in the current workspace.
Build Initiation Extensions
The git plugin can start builds based on many different conditions. The build initiation extensions control the conditions that start a build. They can ignore notifications of a change or force a deeper evaluation of the commits when polling.
Don’t trigger a build on commit notifications
If checked, this repository will be ignored when the notifyCommit URL is accessed whether the repository matches or not.
Force polling using workspace
The git plugin polls remotely using
ls-remote when configured with a single branch (no wildcards!). When this extension is enabled, the polling is performed from a cloned copy of the workspace instead of using
ls-remote.
If this option is selected, polling will use a workspace instead of using
ls-remote.
By default, the plugin polls by executing a polling process or thread on the Jenkins controller. If the Jenkins controller does not have a git installation, the administrator may enable JGit to use a pure Java git implementation for polling. In addition, the administrator may need to disable command line git to prevent use of command line git on the Jenkins controller.
Polling ignores commits from certain users
- Excluded Users
If set and Jenkins is configured to poll for changes, Jenkins will ignore any revisions committed by users in this list when determining if a build should be triggered. This can be used to exclude commits done by the build itself from triggering another build, assuming the build server commits the change with a distinct SCM user. Using this behavior prevents the faster
git ls-remote polling mechanism. It forces polling to require a workspace, as if you had selected the Force polling using workspace extension.
Each exclusion uses exact string comparison and must be separated by a new line. User names are only excluded if they exactly match one of the names in this list.
Polling ignores commits in certain paths
If set and Jenkins is configured to poll for changes, Jenkins will pay attention to included and/or excluded files and/or folders when determining if a build needs to be triggered.
This can be used to exclude commits done by the build itself from triggering another build, assuming the build server commits the change with a distinct SCM user. Using this behavior will preclude the faster git ls-remote polling mechanism, forcing polling to require a workspace, as if you had selected the Force polling using workspace extension as well.
- Included Regions
Each inclusion uses java regular expression pattern matching, and must be separated by a new line. An empty list implies that everything is included.
- Excluded Regions
Each exclusion uses java regular expression pattern matching, and must be separated by a new line. An empty list excludes nothing.
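In Pipeline, included and excluded regions map to the PathRestriction extension. A sketch with hypothetical patterns:

```groovy
// Sketch only: patterns and URL are placeholders.
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'PathRestriction',
        includedRegions: 'src/.*',    // only changes under src/ trigger a build
        excludedRegions: 'docs/.*']], // changes under docs/ are ignored
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```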
Polling ignores commits with certain messages
- Excluded Messages
If set and Jenkins is configured to poll for changes, Jenkins will ignore any revisions committed with a message matching the regular expression pattern when determining if a build needs to be triggered. This can be used to exclude commits done by the build itself from triggering another build, assuming the build server commits the change with a distinct message. You can create more complex patterns using embedded flag expressions.
Strategy for choosing what to build
When you are interested in using a job to build multiple branches, you can choose how Jenkins chooses the branches to build and the order they should be built.
This extension point in Jenkins is used by many other plugins to control the job as it builds specific commits. When you activate those plugins, you may see them installing a custom build strategy.
- Ancestry
- Maximum Age of Commit
The maximum age of a commit (in days) for it to be built. This uses the GIT_COMMITTER_DATE, not GIT_AUTHOR_DATE.
- Commit in Ancestry
If an ancestor commit (SHA-1) is provided, only branches with this commit in their history will be built.
- Default
Build all the branches that match the branch name pattern.
- Inverse
Build all branches except for those which match the branch specifiers configured above. This is useful, for example, when you have jobs building your master and various release branches and you want a second job which builds all new feature branches, that is, branches which do not match those patterns, without redundantly building master and the release branches again each time they change.
Merge Extensions
The git plugin can optionally merge changes from other branches into the current branch of the agent workspace. Merge extensions control the source branch for the merge and the options applied to the merge.
Merge before build
These options allow you to perform a merge to a particular branch before building. The result of the merge may be pushed back to the remote repository if the Git Publisher Push post-build action is selected.
- Name of repository
Name of the repository, such as origin, that contains the branch. If left blank, it’ll default to the name of the first repository configured.
- Branch to merge to
The name of the branch within the named repository to merge to, such as master.
- Merge strategy
Merge strategy selection. Choices include:
default
resolve
recursive
octopus
ours
subtree
recursive_theirs
- Fast-forward mode
--ff: fast-forward which gracefully falls back to a merge commit when required
--ff-only: fast-forward without any fallback
--no-ff: merge commit always, even if a fast-forward would have been allowed
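In Pipeline, a pre-build merge maps to the PreBuildMerge extension. A sketch with placeholder values:

```groovy
// Sketch only: branch names and URL are placeholders.
checkout([$class: 'GitSCM',
    branches: [[name: '*/integration']],
    extensions: [[$class: 'PreBuildMerge',
        options: [mergeRemote: 'origin',     // repository name
                  mergeTarget: 'master',     // branch to merge to
                  mergeStrategy: 'DEFAULT',  // merge strategy
                  fastForwardMode: 'FF']]],  // --ff
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```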
Custom user name/e-mail address
- user.name
Defines the user name value which git will assign to new commits made in the workspace. If given, the environment variables
GIT_COMMITTER_NAME and
GIT_AUTHOR_NAME are set for builds and override values from the global settings.
- user.email
Defines the user email value which git will assign to new commits made in the workspace. If given, the environment variables
GIT_COMMITTER_EMAIL and
GIT_AUTHOR_EMAIL are set for builds and override values from the global settings.
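In Pipeline, this maps to the UserIdentity extension. A sketch reusing the example identity from the global configuration section:

```groovy
// Sketch only: the repository URL is a placeholder.
checkout([$class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'UserIdentity',
        name: 'Janice Examplesperson',
        email: 'janice.examplesperson@example.com']],
    userRemoteConfigs: [[url: 'https://example.com/project.git']]])
```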
Deprecated Extensions
Custom SCM name - Deprecated
Unique name for this SCM. Was needed when using Git within the Multi SCM plugin. Pipeline is the robust and feature-rich way to checkout from multiple repositories in a single job.
Submodule Combinator - Removed
An experiment was created many years ago that attempted to create combinations of submodules within the Jenkins job. The experiment was never available to Freestyle projects or other legacy projects like multi-configuration projects. It was visible in Pipeline, configuration as code, and JobDSL.
The implementation of the experiment has been removed. Dependabot and other configuration tools are better suited to evaluate submodule combinations.
There are no known uses of the submodule combinator and no open Jira issues reported against the submodule combinator. Those who were using submodule combinator should remain with git plugin versions prior to 4.6.0.
The submodule combinator ignores any user provided value of the following arguments to git’s
- doGenerateSubmoduleConfigurations
A boolean that is now always set to
false. Submodule configurations are no longer evaluated by the git plugin.
- submoduleCfg
A list of submodule names and branches that is now always empty. Submodule configurations are no longer evaluated by the git plugin.
Previous Pipeline syntax looked like this:
checkout([$class: 'GitSCM', branches: [[name: 'master']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[url: '']]])
Current Pipeline Syntax looks like this:
Environment Variables
The git plugin assigns values to environment variables in several contexts. Environment variables are assigned in Freestyle, Pipeline, Multibranch Pipeline, and Organization Folder projects.
Branch Variables
- GIT_BRANCH
Name of branch being built including remote name, as in
origin/master
- GIT_LOCAL_BRANCH
Name of branch being built without remote name, as in
master
Commit Variables
- GIT_COMMIT
SHA-1 of the commit used in this build
- GIT_PREVIOUS_COMMIT
SHA-1 of the commit used in the preceding build of this project
- GIT_PREVIOUS_SUCCESSFUL_COMMIT
SHA-1 of the commit used in the most recent successful build of this project
System Configuration Variables
- GIT_URL
Remote URL of the first git repository in this workspace
- GIT_URL_n
Remote URL of the additional git repositories in this workspace (if any)
- GIT_AUTHOR_EMAIL
Author e-mail address that will be used for new commits in this workspace
- GIT_AUTHOR_NAME
Author name that will be used for new commits in this workspace
- GIT_COMMITTER_EMAIL
Committer e-mail address that will be used for new commits in this workspace*
- GIT_COMMITTER_NAME
Committer name that will be used for new commits in this workspace
Token Macro Variables
Some Jenkins plugins (like email extension, build name setter, and description setter) allow parameterized references to reformat the text of supported variables. Variables that support parameterized references to reformat their text are called "token macros". The git plugin provides token macros for:
- GIT_REVISION
Expands to the Git SHA1 commit ID that points to the commit that was built.
- length
integer length of the commit ID that should be displayed.
${GIT_REVISION}might expand to
a806ba7701bcfc9f784ccb7854c26f03e045c1d2, while
${GIT_REVISION,length=8}would expoand to
a806ba77.
- GIT_BRANCH
Expands to the name of the branch that was built.
- all
boolean that expands to all branch names that point to the current commit when enabled. By default, the token expands to just one of the branch names
- fullName
boolean that expands to the full branch name, such as
remotes/origin/masteror
origin/master. Otherwise, it expands to the short name, such as
master.
The most common use of token macros is in Freestyle projects. Jenkins Pipeline supports a rich set of string operations so that token macros are not generally used in Pipelines.
When used with Pipeline, the token macro base values are generally assigned by the first checkout performed in a Pipeline. Subsequent checkout operations do not modify the values of the token macros in the Pipeline.
Properties
Some git plugin settings can only be controlled from command line properties set at Jenkins startup.
Default Timeout
The default git timeout value (in minutes) can be overridden by the
org.jenkinsci.plugins.gitclient.Git.timeOut property (see JENKINS-11286). The property should be set on the controller and on all agents to have effect (see JENKINS-22547).
Command line git is the reference git implementation in the git plugin and the git client plugin. Command line git provides the most functionality and is the most stable implementation. Some installations may not want to install command line git and may want to disable the command line git implementation. Administrators may disable command line git with the property
org.jenkinsci.plugins.gitclient.Git.useCLI=false.
Git Publisher
The Jenkins git plugin provides a "git publisher" as a post-build action. The git publisher can push commits or tags from the workspace of a Freestyle project to the remote repository.
The git publisher is only available for Freestyle projects. It is not available for Pipeline, Multibranch Pipeline, Organization Folder, or any other job type other than Freestyle.
Git Publisher Options
The git publisher behaviors are controlled by options that can be configured as part of the Jenkins job. Options include;
- Push Only If Build Succeeds
Only push changes from the workspace to the remote repository if the build succeeds. If the build status is unstable, failed, or canceled, the changes from the workspace will not be pushed.
- Merge Results
If pre-build merging is configured through one of the merge extensions, then enabling this checkbox will push the merge to the remote repository.
- Force Push
Git refuses to replace a remote commit with a different commit. This prevents accidental overwrite of new commits on the remote repository. However, there may be times when overwriting commits on the remote repository is acceptable and even desired. If the commits from the local workspace should overwrite commits on the remote repository, enable this option. It will request that the remote repository destroy history and replace it with history from the workspace.
Git publisher tags options
The git publisher can push tags from the workspace to the remote repository. Options in this section will allow the plugin to create a new tag. Options will also allow the plugin to update an existing tag, though the git documentation strongly advises against updating tags.
- Tag to push
Name of the tag to be pushed from the local workspace to the remote repository. The name may include Jenkins environment variables or may be a fixed string. For example, the tag to push might be
$BUILD_TAG,
my-tag-$BUILD_NUMBER,
build-$BUILD_NUMBER-from-$NODE_NAME, or
a-very-specific-string-that-will-be-used-once.
- Tag message
If the option is selected to create a tag or update a tag, then this message will be associated with the tag that is created. The message will expand references to Jenkins environment variables. For example, the message
Build $BUILD_NUMBER tagged on $NODE_NAMEwill use the message
Build 1 tagged on special-agentif build 1 of the job runs on an agent named 'special-agent'.
- Create new tag
Create a new tag in the workspace. The git publisher will fail the job if the tag already exists.
- Update new tag
Modify existing tag in the workspace so that it points to the most recent commit. Many git repository hosting services will reject attempts to push a tag which has been modified to point to a different commit than its original commit. Refer to force push for an option which may force the remote repository to accept a modified tag. The git documentation strongly advises against updating tags.
- Tag remote name
Git uses the 'remote name' as a short string replacement for the full URL of the remote repository. This option defines which remote should receive the push. This is typically
origin, though it could be any one of the remote names defined when the plugin performs the checkout.
Git publisher branches options
The git publisher can push branches from the workspace to the remote repository. Options in this section will allow the plugin to push the contents of a local branch to the remote repository.
- Branch to push
The name of the remote branch that will receive the latest commits from the agent workspace. This is usually the same branch that was used for the checkout
- Target remote name
The short name of the remote that will receive the latest commits from the agent workspace. Usually this is
origin. It needs to be a short name that is defined in the agent workspace, either through the initial checkout or through later configuration.
- Rebase before push
Some Jenkins jobs may be blocked from pushing changes to the remote repository because the remote repository has received new commits since the start of the job. This may happen with projects that receive many commits or with projects that have long running jobs. The
Rebase before pushoption fetches the most recent commits from the remote repository, applies local changes over the most recent commits, then pushes the result. The plugin uses
git rebaseto apply the local changes over the most recent remote changes.
Because
Rebase before pushis modifying the commits in the agent workspace after the job has completed, it is creating a configuration of commits that has not been evaluated by any Jenkins job. The commits in the local workspace have been evaluated by the job. The most recent commits from the remote repository have not been evaluated by the job. Users may find that the risk of pushing an untested configuration is less than the risk of delaying the visibility of the changes which have been evaluated by the job.
Combining repositories
A single reference repository may contain commits from multiple repositories. For example, if a repository named
parent includes references to submodules
child-1 and
child-2, a reference repository could be created to cache commits from all three repositories using the commands:
$ mkdir multirepository-cache.git $ cd multirepository-cache.git $ git init --bare $ git remote add parent $ git remote add child-1 $ git remote add child-2 $ git fetch --all
Those commands create a single bare repository with the current commits from all three repositories. If that reference repository is used in the advanced clone options clone reference repository, it will reduce data transfer and disc use for the parent repository. If that reference repository is used in the submodule options clone reference repository, it will reduce data transfer and disc use for the submodule repositories.
Bug Reports
Report issues and enhancements in the Jenkins issue tracker.
Contributing to the Plugin
Refer to contributing to the plugin for contribution guidelines. Refer to plugin development priorities for the prioritized list of development topics.
Remove Git Plugin BuildsByBranch BuildData Script
The git plugin has an issue (JENKINS-19022) that sometimes causes excessive memory use and disc use in the build history of a job. The problem occurs because in some cases the git plugin copies the git build data from previous builds to the most recent build, even though the git build data from the previous build is not used in the most recent build. The issue can be especially challenging when a job retains a very large number of historical builds or when a job builds a wide range of commits during its history.
Multiple attempts to resolve the core issue without breaking compatibility have been unsuccessful. A workaround is provided below that will remove the git build data from the build records. The workaround is a system groovy script that needs to be run from the Jenkins Administrator’s Script Console (as in ). Administrator permission is required to run system groovy scripts.
This script removes the static list of BuildsByBranch that is stored for each build by the Git Plugin.
import hudson.matrix.* import hudson.model.* hudsonInstance = hudson.model.Hudson.instance jobNames = hudsonInstance.getJobNames() allItems = [] for (name in jobNames) { allItems += hudsonInstance.getItemByFullName(name) } // Iterate over all jobs and find the ones that have a hudson.plugins.git.util.BuildData // as an action. // // We then clean it by removing the useless array action.buildsByBranchName // for (job in allItems) { println("job: " + job.name); def counter = 0; for (build in job.getBuilds()) { // It is possible for a build to have multiple BuildData actions // since we can use the Mulitple SCM plugin. def(); counter++; } } if (job instanceof MatrixProject) { def runcounter = 0; for (run in build.getRuns()) { gitActions = run) } } run.actions.remove(action) run.actions.add(action) run.save(); runcounter++; } } } if (runcounter > 0) { println(" -->> cleaned: " + runcounter + " runs"); } } } if (counter > 0) { println("-- cleaned: " + counter + " builds"); } } | https://plugins.jenkins.io/git | CC-MAIN-2022-05 | refinedweb | 6,594 | 54.52 |
When you're documenting an API, provide a complete API reference, typically generated from source code using doc comments that describe all public classes, methods, constants, and other members.
Use the basic guidelines in this document as appropriate for a given programming language. This document doesn't specify how to mark up doc comments; for more information, refer to the specific style guide for each programming language.
This page also doesn't cover web APIs. The style suggestions below may be useful to keep in mind while documenting web APIs, but this page doesn't discuss how to write about resources or collections.
Documentation basics
The API reference must provide a description for each of the following:
Every class, interface, struct, and any other similar member of the API (such as union types in C++).
Every constant, field, enum, typedef, etc.
Every method, with a description for each parameter, the return value, and any exceptions thrown.
The following are extremely strong suggestions. In some cases, they don't make sense for a particular API or in a specific language, but in general, follow these guidelines:
On each unique page (for a class, interface, etc.), include a code sample (~5-20 lines) at the top.
Put all API names, classes, methods, constants, etc. in code font, and link each name to the corresponding reference page. Most document generators do this automatically for you.
Put string literals in code font, and enclose them in double quotation marks. For example, XML attribute values might be
"wrap_content"or
"true".
Put parameter names in italic. For example, when you refer to the parameters of a method like
doSomething(Uri data, int count), italicize the names data and count.
Make sure that the spelling of a class name in documentation matches the spelling in code, with capital letters and no spaces (for example,
ActionBar).
Don't make class names plural (
Intents,
Activities); instead, add a plural noun (
Intentobjects,
Activityinstances).
However, if a class has a name that's a common term, you can refer to it with the corresponding English word, in lowercase and not in code font (activities, action bar).
Classes, interfaces, structs
In the first sentence of a class description, briefly state the intended purpose or function of the class or interface with information that can't be deduced from the class name and signature. In additional documentation, elaborate on how to use the API, including how to invoke or instantiate it, what some of the key features are, and any best practices or pitfalls.
Many doc tools automatically extract the first sentence of each class description for use in a list of all classes, so make the first sentence unique and descriptive, yet short. Additionally:
Do not repeat the class name in the first sentence.
Do not say "this class will/does ...."
Do not use a period before the actual end of the sentence, because some doc generators naively terminate the "short description" at the first period. For example, some generators terminate the sentence if they see "e.g.", so use "for example" instead.
The following example is the first sentence of the description for Android's
ActionBar
class:
A primary toolbar within the activity that may display the activity title, application-level navigation affordances, and other interactive items.
Make descriptions for members (constants and fields) as brief as possible. Be sure to link to relevant methods that use the constant or field.
For example, here's the description for the
ActionBar class's
DISPLAY_SHOW_HOME
constant:
Show 'home' elements in this action bar, leaving more space for other navigation elements. This includes logo and icon.
See also:
setDisplayOptions(int),
setDisplayOptions(int, int)
Methods
In the first sentence for a method description, briefly state what action the method performs. In subsequent sentences, explain why and how to use the method, state any prerequisites that must be met before calling it, give details about exceptions that may occur, and specify any related APIs.
Document any dependencies (such as Android permissions) that are needed to call the method, and how the method behaves if such a dependency is missing (for example, "the method throws a SecurityException" or "the method returns null").
For example, here's the description for Android's Activity.isChangingConfigurations() method:
Checks().
Use present tense for all descriptions—for example:
Adds a new bird to the ornithology list.
Returns a bird.
Description
If a method performs an operation and returns some data, start the description with a verb describing the operation—for example:
- Adds a new bird to the ornithology list and returns the ID of the new entry.
If it's a "getter" method and it returns a boolean, start with "Checks whether ...."
If it's a "getter" method and it returns something other than a boolean, start with "Gets the ...."
If it has no return value, start with a verb like one of the following:
Turning on an ability or setting: "Sets the ...."
Updating a property: "Updates the ...."
Deleting something: "Deletes the ...."
Registering a callback or other element for later reference: "Registers ...."
For a callback: "Called by ...." (Usually for a method that's named starting with "on", such as
onBufferingUpdate.) For example, "Called by Android when ...." Then, later in the description: "Subclasses implement this method to ...."
If it's a convenience method that constructs the class object, start with "Creates a ...."
Parameters
For parameter descriptions, follow these guidelines:
Capitalize the first word, and end the sentence or phrase with a period.
Begin descriptions of non-boolean parameters with "The" or "A" if possible:
The ID of the bird you want to get.
A description of the bird.
For boolean parameters for requesting an action, start sentences with "If true ..." and "If false ...."—for example:
- If true, turn traffic lines on. If false, turn them off.
For boolean parameters for setting the state of something (not making a request), use the format "True if ...; false otherwise."—for example:
- True if the zoom is set; false otherwise.
In this context, don't put the words "true" and "false" in code font or quotation marks.
For parameters with default behavior, explain what the behavior is for each value or range of values, and then say what the default value is. Use the format Default: to explain the default value.
Return values
Be as brief as possible in the return value's description; put any detailed information in the class description.
If the return value is anything other than a boolean, start with "The ...."—for example:
- The bird specified by the given ID.
If the return value is a boolean, use the format "True if ...; false otherwise."—for example:
- True if the bird is in the sanctuary; false otherwise.
Exceptions
In languages where the reference generator automatically inserts the word "Throws", begin your description with "If ...":
- If no key is assigned.
Otherwise, begin with "Thrown when ...":
- Thrown when no key is assigned.
Deprecations
When something is deprecated, tell the user what to use as a replacement. (If you track your API with version numbers, mention which version it was first deprecated in.)
Only the first sentence of a description appears in the summary section and index, so put the most important information there. Subsequent sentences can explain why it was deprecated, along with any other information that's useful for a developer using your API.
If a method is deprecated, tell the reader what to do to make their code work.
Examples
Deprecated. Use #CameraPose instead.
Deprecated. Access this field using the
getField()method. | https://developers.google.cn/style/api-reference-comments | CC-MAIN-2021-49 | refinedweb | 1,248 | 56.25 |
In this article, we will look at the checks a developer can incorporate in his application to check whether the device on which the application is running is jailbroken or not. Checking whether a device is jailbroken or not can have many advantages for your application. As we have already seen, an attacker can run tools like Cycript, GDB, Snoop-it etc to perform runtime analysis and steal sensitive data from within your application. If you are really looking to add an extra layer of security for your application, you should not allow your application to be run on a jailbroken device. Please note that millions of users jailbreak their devices and hence not allowing an application to be run on a jailbroken device could have a significant impact on your user base. Another thing you can do is instead block some of the features in your application rather than disabing it entirely. We will also look at how hackers can bypass the check for jailbreak detection in your application using Cycript.
Once a device is jailbroken, a lot of other files and applications are installed on the devcice. Checking for these files in the filesystem can help us identify whether the device is jailbroken or not. For e.g, most of the jailbreak softwares install Cydia on the device after jailbreaking. Hence just a simple check for the file path of Cydia can determine whether the device is jailbroken or not.
NSString *filePath = @"/Applications/Cydia.app"; if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) { //Device is jailbroken }
However, not all devices that are jailbreaked have Cydia installed on them. In fact, most hackers can just change the location of the Cydia App. Checking for many other files related to Jailbroken devices can make this method much more efficient. For e.g, one can check if Mobile Substrate is installed on the device or not, which many applications require to run on a jailbroken device. One can also check for the location of the SSH Daemon, or the shell interpreter. Combining all these checks, we get a method like this.
+(BOOL)isJailbroken{; } return NO; }
We have also learnt from the previous articles that applications that run as a mobile user run in a sandboxed environment and go inside the directory /var/mobile/Applications whereas applications that run with the root user (e.g Apple’s preloaded applications) aren’t subject to any sandbox environment and go inside the directory /Applications. A user running a jailbroken device can install your application in the /Applications folder thereby giving it root privileges. Hence, adding a check to see whether the application follows sandboxing rules can help the user identify whether the application is jailbroken or not. A good way to check for it would be to see if we can modify a file in some other location outside the application bundle. { //Device is not jailbroken [[NSFileManager defaultManager] removeItemAtPath:@"/private/jailbreak.txt" error:nil]; }
We know that a skilled hacker can just modify the location of the application. However, we know that 80% or more of the devices that are jailbroken have Cydia installed on them, and even if the hacker can change the location of the Cydia app, he most probably won’t change the URL scheme with which the Cydia app is registered. If calling the Cydia’s URL scheme (cydia://) from your application gives a success, you can be sure that the device is jailbroken.
if([[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"cydia://package/com.example.package"]]){ //Device is jailbroken }
Let’s also add a condition to make sure this code does not execute if we are testing our application on a simulator and not an actual device. After combining all the above techniques, our method looks like this.
+(BOOL)isJailbroken{ #if !(TARGET_IPHONE_SIMULATOR) { [[NSFileManager defaultManager] removeItemAtPath:@"/private/jailbreak.txt" error:nil]; } if([[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"cydia://package/com.example.package"]]){ //Device is jailbroken return YES; } #endif //All checks have failed. Most probably, the device is not jailbroken return NO; }
Honestly speaking, there is no foolproof method of detecting jailbroken devices. A skilled hacker will always find a way to bypass these checks. He can simply find the instructions in the binary and replace all instructions with No-op. He can also swizzle your method implementation with his own using Cycript.
He can first find the class information of the application using Class-dump-z. Over here, he can see a method named +(BOOL)isJailbroken in the JailbreakDetector class. Note that it is a class method as it begins with positive sign. It obviously means this method checks whether a device is jailbroken or not and returns YES if the device is jailbroken. If you are not getting any of this, you should consider reading previous articles.
He can then hook into this application using Cycript.
And then print out all the methods for the JailbreakDetector class. Please note that we are using JailbreakDetector->isa.messages because isJailbroken is a class method. To find the instance methods, just using JailbreakDetector.messages would have worked for us.
And then he can swizzle the method implementation with his own that always returns a NO. If you are not getting this, i suggest that you read the article on Method Swizzling.
As a developer, what we can do is change the method name to something that doesn’t look quite appealing to the hacker. For e.g, the className JailbreakDetector could be renamed as ColorAdditions and the method +(BOOL)isJailbroken could be replaced by +(BOOL)didChangeColor with the implementation being the same. Something like this wouldn’t attract the attention of the hacker. He can always look at the calls that are being made inside this method using Snoop-it, GDB etc, but a small change like this can surely help in confusing him.
Nice explanation thanks
In regards to submitting apps to the app store with this code:
The App Store Review Guidelines state that “Apps that read or write data outside its designated container area will be rejected”.
Do Apple see this code as not adhering to their guidelines or do they see this as a good prevention measure?
Really good article.
Just 2 additions:
1. When you are writing a string into the jailbreak.txt file, you have a typo in your error handling. Remove the “&” inside “…encoding:NSUTF8StringEncoding error:&error];” It should be “…encoding:NSUTF8StringEncoding error:&error];”
2. In your if…else block when you are checking for an error upon writing jailbreak.txt into the file system, your else block makes no sense. You need to remove the file if there was NO error. Right now you are removing the file if there was an error.
What do you think?
Cheers
MS II.
Jailbreak and antic rack calls should not be Objective-C but straight C (or should exist and be used in both C and Objective-C forms). This will make it harder to do Objective-C method swizzling and class-dump etc. Also try and use standard C library calls in addition to the NS Frameworks for things like file checks. Doing them twice using both methods makes it harder to stamp out. Not perfect but makes things a bit harder.
Many thank for your article ! | http://resources.infosecinstitute.com/ios-application-security-part-23-jailbreak-detection-evasion/ | CC-MAIN-2014-52 | refinedweb | 1,206 | 55.54 |
151 Quick Ideas to Improve Your People Skills

Bob Dittmer and Stephanie McFarland

Franklin Lakes, NJ

Copyright © 2009 by Robert E. Dittmer and Stephanie McFarland

All rights reserved under the Pan-American and International Copyright Conventions. This book may not be reproduced, in whole or in part, in any form or by any means electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system now known or hereafter invented, without written permission from the publisher, The Career Press.

151 QUICK IDEAS TO IMPROVE YOUR PEOPLE SKILLS
EDITED BY KARA REYNOLDS
TYPESET BY MICHAEL FITZGIBBON
Cover design by Jeff Piasky
Printed in the U.S.A. by Book-mart Press

To order this title, please call toll-free 1-800-CAREER-1 (NJ and Canada: 201-848-0310) to order using VISA or MasterCard, or for further information on books from Career Press.

The Career Press, Inc., 3 Tice Road, PO Box 687, Franklin Lakes, NJ 07417
www.careerpress.com

Library of Congress Cataloging-in-Publication Data
Dittmer, Robert E., 1950–
151 quick ideas to improve your people skills / by Robert E. Dittmer and Stephanie McFarland.
p. cm.
Includes index.
ISBN 978-1-60163-037-7
1. Interpersonal communication. 2. Interpersonal relations. I. McFarland, Stephanie, 1968– II. Title. III. Title: One hundred fifty-one quick ideas to improve your people skills.
BF637.C45D583 2009
158.2'6--dc22
2008035812
Contents

1. Why Interpersonal Skills Are So Important
2. People Don’t Care How Much You Know Until They Know How Much You Care
3. Social Intelligence vs. Technical Knowledge
4. Be Socially Aware
5. Relationships Are Priority
6. The Nature of Your Relationships
7. Envision What You Want From Your Relationships
8. Behave in a Way That Secures Relationships
9. Look for Ways to Serve Others
10. Don’t Ingratiate
11. Apply the Pygmalion Effect
12. Believe That All People Start With Good Intentions
13. Give ’Em the Benefit of the Doubt
14. Live by the Golden Rule
15. Practice the Platinum Rule
16. Always Look Toward Solutions
17. Have Reasonable Expectations of Yourself
18. Have Reasonable Expectations of Others
19. Be Principle-Centered
20. Allow Others to Hold to Their Principles
21. Set Boundaries
22. Defend Your Boundaries
23. Be Genuine
24. Don’t Take Yourself Too Seriously
25. Have a Sense of Humor
26. Laugh at Yourself
27. Cherish Your Goofs
28. Social Skills Are Always a Work in Progress
29. Your Character—and Your Reputation—Is Your Calling Card
30. Be Authentic
31. Act With Integrity
32. Build Trust
33. Keep Your Word
34. Be Straight Up
35. View Discernment as a Gift
36. Always Show Respect
37. Practice Tolerance
38. Choose Words Carefully
39. Words: I vs. We
40. Use Kind Words
41. Don’t Kill Relationships With Your Behavior
42. Do Not Gossip
43. Don’t Be Dismissive
44. Don’t Be Condescending
45. Don’t Be Manipulative
46. Don’t Make Assumptions
47. Don’t Be Pessimistic
48. Don’t Be a Cynic
49. Don’t Be Over-Reactive
50. Don’t Be Domineering
51. Don’t Be Overly Opinionated
52. Don’t Be Overly Aggressive
53. Help Others Grow
54. Believe in Others
55. Wage Peace in Your Relationships
56. Be a Peacemaker Between Friends
57. Respect Different Personality Types
58. Understand Different Styles
59. Recognize That Styles Differ From Opinions
60. Know Your Own Style
61. Stretch Beyond Your Style
62. Embrace Different Styles
63. Determine if You Are Shy
64. Overcome Shyness
65. Overcome Feeling Inferior
66. Overcome Feeling Intimidated
67. Don’t Be Too Talkative
68. Listen, Don’t Talk
69. Get Out of Your Own Way
70. Douse the Domineering
71. Don’t Be Reactive
72. Tackle the Intimidator
73. Strive for Live Interaction
74. Practice Face-to-Face Communication
75. At Least Make It Live
76. Beware of E-mail
77. Remember That People Are Creatures of Emotion
78. Fill the Emotional Bank Account
79. Make Friends
80. Develop Your Emotional Intelligence
81. Remember Names
82. Look ’Em in the Eye
83. Give Your Undivided Attention
84. Be “Present”
85. Practice Good Listening
86. Connect With People Through Questions
87. Be Careful With Your Opinions
88. Withhold Judgment
89. See Both Sides
90. Edify, Edify, Edify
91. Give Honesty With an Equal Dose of Compassion
92. Help Others Be Heard
93. Help Others Be Understood
94. Allow People to Save Face
95. Encourage
96. Encourage With Words and Perspective
97. Pat Others on the Back
98. Be a Cheerleader
99. Help Others Achieve Their Goals
100. Let Others Shine
101. Look for Reasons to Celebrate
102. Remember Birthdays, Anniversaries, and Such
103. Fill Your Own Emotional Bank Account
104. Feed Your Own Needs
105. Call on Your Support Group
106. Keep Honest Company
107. Get Inspired
108. Find Friends Who Edify You in Your Absence
109. Find a Class Act to Follow
110. Take a "People Break"
111. Sharpen the Saw by Sharpening Your Mind
112. Get Away From Your Desk for Lunch
113. Attend Social Events
114. Handle Conflict With Confidence
115. Can't We All Just Get Along?
116. 365 Opportunities for Conflict—366 in a Leap Year
117. See Conflict or Disagreement as an Opportunity
118. See Rough Starts as an Opportunity
119. Breathe!
120. Give Yourself a Pep Talk
121. Have the Difficult Conversations Beforehand
122. Handle Conflict One-on-One
123. Having Your Say Doesn't Mean Always Having Your Way
124. Learn to Eat Crow
125. Bring the Peace Pipe
126. Break Bread
127. Fight Fair
128. Be Mindful of Your Thoughts—They Can Be a Path to the Dark Side
129. Don't Take Things Personally
130. Don't Make Things Personal
131. He Who Keeps His Mouth Shut, Keeps His Life
132. Dial Down the Volume
133. Watch Your Body Language—It Speaks Volumes
134. Give People Space
135. What Goes Over the Devil's Back, Always Comes Under His Belly
136. There Is No Right or Wrong
137. Winner Never Takes All
138. Fight for the Relationship
139. Get Clear
140. Ask, Don't Persuade
141. Present, Don't Tell
142. Look for Middle Ground
143. Start From a Point of Commonality
144. Some Nuts Are Worth Cracking
145. Put the "Moose on the Table"
146. Pick Your Battles
147. Mend Fences
148. Forgive Yourself for Failings
149. Forgive Others as Well
150. Be the First to Offer the Olive Branch—or the Peace Pipe
151. Every Difficult Relationship Has Lessons

Index
About the Authors
How to Use This Book

Every quick idea in this book is tested and true. The ideas come from the collected experiences and wisdom of hundreds of people—well beyond just the authors. And they are presented here to help you learn how to better create lasting relationships with others through improving your people skills.

The book is designed to be consumed piecemeal—that is, in small bites. So don't try all of these ideas at once. Read the book quickly through to gain a quick impression of the ideas here, and then start picking out those that seem to be immediately helpful, and try them out. Some of these ideas are in sequence, and those will make logical sense to you when you read them. Later, revisit this book for some new ideas or techniques. Every 90 days or so, routinely go back and review the others, and pick a few more to try. And so on. As your situation changes you may well find usable ideas that you discounted earlier.
Remember, all of these ideas and concepts are proven techniques—proven by research and other professionals around the country and around the world. They have worked for others, and they can work for you!
Introduction

Have you ever found yourself saying, "Work would be great if it weren't for the darn people"? Yeah, we've all felt that way from time to time—and, often, more times than we would like. But people are a fact of life, and they are a fact of work. And to be effective in both, you have to learn to deal with them—effectively. And that's where this book comes in.

Whether you just need to tweak your approach for making connections with people in the workplace, or you're looking for ways to handle an ongoing conflict with a coworker, 151 Quick Ideas to Improve Your People Skills can help. It is your comprehensive source for building better working—and personal—relationships.

The tips and insights shared in this book cover four key areas of people skills: understanding why your social intelligence is critical to your career success, understanding your own interaction style (and how it affects others), how to build goodwill and emotional equity with people, and how to manage conflict—and thrive through it!

For example, you've probably been taught the Golden Rule, but are you familiar with the Platinum Rule? Do you know how powerful the Pygmalion Effect can be in working with people? Do you approach people as creatures of logic, or emotion? Do you know how to set boundaries? Do you have reasonable expectations of yourself when dealing with others? 151 Quick Ideas to Improve Your People Skills covers these topics, and much, much more.

In short, this book is an excellent guide, filled with fun, relevant, and practical ideas to which you can relate. It gives you a full-spectrum approach to dealing with people in just about every situation—and how to get back on track when you fall off the "people skills" wagon. So dig in and enjoy! And start learning what it takes to build better working—and personal—relationships.
1. Why Interpersonal Skills Are So Important

Assignment: Before you read on, consider the people with whom you interact on a daily or weekly basis. What is your relationship with them? Have you carefully cultivated and nurtured those relationships? Do you feel good about those relationships?

A full litany of proverbs exist across continents and cultures that tell us that our interactions with people mean more than anything else we do in this life, and that includes our careers. Our interactions with people are the signatures of our lives, both personally and professionally. In fact, author Daniel Goleman says that our emotional intelligence (EQ) with people is more important than our IQ. For example, you've probably heard this one: "People don't know how much you know, until they know how much you care." And as the great Dale Carnegie once said: "People are not creatures of logic. They are creatures of emotion."

For us to be effective in our jobs, we have to deal with people—of course, most particularly our work relationships—and we have to deal with them effectively to be successful in our work. When we have good relationships at work, it affects the rest of our lives, from how much energy we have when we get home at night to the attitudes we bring home to our families. Considering we spend a minimum of eight hours of every day on the job, the reward from good working relationships goes beyond the office. It's one big circular package.
People with solid interpersonal skills know how to build effective relationships. They know it's the currency that buys more reward in life than any gold coin or greenback ever produced, and they know that EQ is more important than IQ. To paraphrase a popular advertisement: You need people.

Epilogue: Who you know is important, but having a network of solid relationships is even more important.

2. People Don't Care How Much You Know Until They Know How Much You Care

Assignment: Think about your own experiences with others. Do you recognize people with whom you have good, solid relationships that were developed because you discovered they really cared about you? Did that lead you to reciprocate? Do you have other, weak relationships in which you don't sense that the other person really cares?

In developing relationships, people first need to care about you. And in order to care about you, they often need to understand that you care about them. After all, relationships need to be reciprocal to be effective.

We see this daily with students at a university where I teach. Students walk into the classroom wanting to have an academic, learning relationship with the professor, but are often not sure whether the professor really cares about them or is just there to get the lecture in and go back to researching. The good professors find ways to communicate to students early on that they truly care about the students' successes. Those that do so find students engaging them before and after class, e-mailing them with thoughts and ideas, and doing more than the minimum in class to be successful. The students benefit by knowing that the professor is there to be a part of their learning process in a personal way, not just as a role or function. The professor benefits by having students engaged and involved, which is much better than having lumps sit in the classroom and merely listen. We've seen many professor/student relationships last years, even after college is done and the student is off to a profession. These students are often great advocates for the university and the programs the professor teaches. It makes for a superb learning environment where everyone benefits—even the professor.

If you don't care about them, why should they care about you?

Epilogue: Important and effective relationships are built on a foundation of interest and concern for the other party.

3. Social Intelligence vs. Technical Knowledge

Though we spend the vast majority of our lives developing our technical capabilities to make us attractive in the job market, few of us put specific focus on developing our people skills.
But it is the people skills—also known as social intelligence—that determine our overall long-term success.

Assignment: Using the description of social intelligence given here, rate yourself on each factor and assess how well you measure up. Then make a plan to address any shortcomings.

Think about it for a moment. Just about anyone can learn technical skills associated with his or her area of interest, such as how to work with a specific business or industrial machine. Technical skills require us to understand and implement concepts, theories, and tactical knowledge. Add practice to that knowledge and you get technical proficiency. But these do not have opinions, experiences, emotions—the things that make working with people both difficult and rewarding. After all, we don't work in a vacuum; we work with other people.

Although employers today certainly demand technical proficiency from their employees, they require so much more. They want people who can communicate, solve problems, show leadership, and have a sense of energy when implementing the day-to-day. They also want employees who are socially sensitive, are confident, know who they are in terms of strengths and weaknesses, know how to build rapport, can adapt and flex with rapidly changing work environments, and influence others in a way that moves themselves and others forward, in a myriad of situations and circumstances.

In short, to be successful today, having technical expertise is not enough; you must be socially intelligent, which means being aware of who you are—the good, the bad, and the ugly. It also means knowing how to manage yourself—your energy, your emotions, and your reactions. And it means having the ability to see things from others' perspectives and build relationships through all kinds of situations. This takes social intelligence. The good news is that social intelligence is something you can develop, practice every day, and fine-tune throughout your life.
Epilogue: Employers today want employees who are socially sensitive, know how to build rapport, and influence others in a way that moves themselves and others forward.

4. Be Socially Aware

It is important for you to recognize that any good team or effective group of people is dependent on social interactions—both personal and professional.

Assignment: Look around you. Where are the networks, both formal and informal? Who is in them? Which ones are important to your success? How can you join them?

You need to understand that there are always set patterns of interactions that we call networks. Some are in organizations; some are outside organizations. Formal networks are those established by organizations: networks of employees who work together, who work for specific supervisors, and who interact with designated others by their jobs and job descriptions. The organization dictates who interacts with whom. Informal networks are social in nature, and can be the more important of the two types. These networks are social in that people who interact in these networks are self-selected. They choose with whom they will interact rather than have others, such as an organization, choose for them. An example is a group of friends from college who
meet occasionally to have lunch and exchange life experiences. Or it could be a group of like professionals (CPAs, for example) who meet monthly to talk about their profession. It is important to recognize these networks, and then become part of those networks.

Epilogue: These social networks are especially important because they set up relationships that can be helpful in the future.

5. Relationships Are Priority

Okay, we've started talking about relationships, and here's why: Your people skills lead to important relationships that can help you in your personal and professional lives.

Assignment: Identify the people with whom you already have relationships in your personal and professional life. Are they good ones? Positive? Helpful to you?

It is personal and professional relationships that make all the difference in hearing about that new job opportunity in another company, or that opportunity to meet someone who could be important to and in your future (a future spouse?), or that chance to meet an important person in your profession. So we will be continually talking about people skills as they lead to positive and mutually beneficial relationships in your life.
Epilogue: Relationships are the social interactions that make societies function effectively. Gain them and maintain them.

6. The Nature of Your Relationships

When you think of the people around you, particularly at work, think of the level of those relationships. Not all are similarly developed, are they? Some people you have rapport with, some people you're just getting to know, and some relationships fall in the middle.

Assignment: Make a list of your key relationships and then determine where they fall on a continuum, from solid rapport to recently introduced.

As you interact with people, you have to remember that relationships are also subject to the situations and circumstances of the moment. For example, a key relationship could be fine in the morning, but in the afternoon a poorly handled disagreement could alter that relationship. How well you handle them together—the nature of the relationship and the current circumstances—determines how well they will progress, or stall. If you take stock now of where each of your key relationships stand, you'll be better able to apply the tips and techniques described in the pages ahead. And you'll be able to keep them moving in a positive direction for the future.
Epilogue: Knowing the nature of your key relationships will help you know which people strategies are most effective when circumstances bring change.

7. Envision What You Want From Your Relationships

As you consider the relationships you just listed (in Idea 6), think about where you want to take each relationship. This will help you remain focused on what you want to achieve as you work through circumstances and situations that can affect the relationship through time.

Assignment: Take your list from Idea 6, and now set goals for each relationship. Analyze your list and set relationship goals, ranging from three to six months.

For example, would you like to build a better rapport with someone you just met? Is this someone with whom you will be working on a regular basis, and a stronger rapport will help you both work more effectively? That's a good goal! That's what we call mutually beneficial—a stronger rapport can help you both. In short, a better acquaintance and some coworker bonding will create a win-win: a situation in which you both gain from the relationship. And remember: A win-win is always the ultimate relationship goal.
Epilogue: Set relationship goals that create a win-win for each party.

8. Behave in a Way That Secures Relationships

The best way to build and maintain a relationship is to apply both the Golden Rule and the Platinum Rule, a basic tenet of all relationships. One encourages you to treat others as you want to be treated, and the other requires that you treat others as they want to be treated.

Assignment: Look more closely at your list of relationships, and what you want from each. Now, what do you need to do to secure those relationships now and through time? Keep reading.

Generally, both fall into two basic categories: respect and trust. If you treat others with respect, they will trust you. And you need to maintain that trust to retain their respect. You have to maintain people's confidences, you have to allow them to save face, and you have to hear them, and let them be heard. The list is long, and it's focused and tedious work some days (every day with some people). But if you want solid relationships that last, you have to behave in a way that secures them, which means you have to do a lot of things right in every interaction.
Epilogue: Securing relationships with people means you have to treat them in ways that demonstrate trust and respect with every interaction.

9. Look for Ways to Serve Others

Building solid relationships means finding ways to help others. To be good with people, you have to firmly adopt a service mentality.

Assignment: Look at your list (from Idea 6). How can you serve those people? Once you know, act. Or wait until the time is right or the need has developed. Make the offer.

Discover what you can do to help those around you. For example, you can serve others by being a coworker in whom to confide, a friend who inspires people, or an office cheerleader who drives someone to achieve his or her goals. Serving others is a quick way to start a relationship on a positive note, and nurture a developing one, in some simple ways. Yet, serving others can be very rewarding. In short, relationships are a dance of service, and, in time, you have to let others help you, too. The following pages give you scores of ideas and opportunities to serve others—from helping them to be heard to letting them shine.

Epilogue: Service to others is the strongest binding force in relationships.

10. Don't Ingratiate

We've given this one a nice word in the title. Not sure what I mean? Brown-noser. Suck-up. Teacher's pet. These are just some of the words and phrases used for the same thing—there are others inappropriate for this book. And by any other word or set of words, none of them suggest a positive relationship.

Assignment: Watch others who behave this way. What do you think of them? What is your attitude toward them? It isn't positive, is it? Do you want to be perceived that way? If not, stay away from this behavior.

Ingratiating yourself with the boss or with others is simply not a positive step to any relationship. Others recognize it as an attempt to gain undue influence and discount you for it. The person you have ingratiated yourself to thinks less of you. In a sense, one party has the power over the other. And those who recognize that relationship will denigrate you for it. You gain nothing.

Epilogue: The ability to ingratiate yourself with others might be a skill, but it's a negative and counterproductive one.

11. Apply the Pygmalion Effect

What? Isn't that a musical? Well, George Bernard Shaw did base his musical on the concept, but the true credit for this psychological theory actually goes to two scholars and a horse (yes, a horse!) in 1911, and it goes like this: You will get from people what you expect of people.

Assignment: Think of one coworker with whom you have difficulty, or even your child or spouse. In every interaction, treat him exactly as if he has already become the person you wish him to be. Practice this long enough, and you'll be pleasantly surprised by the results.

In case that description is as clear as mud, we'll let Shaw's character Eliza Doolittle explain it: "You see, really and truly, apart from the things anyone can pick up (the dressing and the proper way of speaking and so on), the difference between a lady and a flower girl is not how she behaves, but how she's treated. I shall always be a flower girl to Professor Higgins, because he always treats me as a flower girl, and always will. But I know I can be a lady to you because you always treat me as a lady, and always will."
If you believe the best of people, you will treat them as if they are the best. And, in turn, they will likely give you exactly what you believe and expect. That's the Pygmalion Effect, and it's been proven in psychological studies.

Epilogue: Use the Pygmalion Effect to your advantage: Believe the best of people, and expect the best from them.

12. Believe That All People Start With Good Intentions

Repeat after me: "The glass is half full." Don't fall into the trap of assuming everyone has some ulterior motive for everything they do.

Assignment: Examine your approach to people. Do you think well of them until they demonstrate otherwise, or do you automatically assume they are opposed to you? If the latter, consider a change of attitude. Be positive and start with the assumption that everyone is operating with good intentions.

If you assume otherwise, you will be perceived as distrusting. That establishes an internal (and often external) reputation as a skeptic, which is inimical to you and your goals. And that leads to: "If he doesn't trust me, why should I trust him?" But if you maintain the assumption that all people are working from a point of good intentions—that they mean well—then
you release yourself to think the best of people. In a sense, you manifest what you think about others. So think positive thoughts about them, and expect the best from them in return (see Idea 11).

Epilogue: Most people want to behave appropriately, and do want positive relationships. Assume they mean well, always starting from the assumption that they have good intentions.

13. Give 'Em the Benefit of the Doubt

We are often too quick to judge the actions and motivations of others. When things don't go specifically as they should, we too often look to place blame. We tend to jump to conclusions.

Assignment: Think back on your own experiences. Have others done this to you? How did you feel? How did they react? Have you done this to others in the past?

Instead of looking first to place blame, look to give others the benefit of the doubt. This is especially important when evaluating motivations. Don't make assumptions. Hold off on thinking someone is the problem until you have established evidence. You want others to do this for you (see the Golden Rule), so do it for others. Give 'em the benefit of the doubt, until you have well-founded reasons not to.

Epilogue: Giving others the benefit of the doubt allows them to do the same for you.

14. Live by the Golden Rule

We're sure you remember this one: "Do unto others as you would have them do unto you." You may have learned this from your father, mother, grandmother, or Sunday School teacher. And it's an adage people across the world embrace as a universal human truth.

Assignment: Examine your own relationships with others. Are you practicing the Golden Rule? If not, why not?

The beauty of the Golden Rule is that it has no mystique to it—it's straightforward and simple. Treat others as you want to be treated. It means, if you want others to treat you with respect and courtesy, then you have to treat them in the same manner. The rule remains the same whether you're 8 or 80.

Epilogue: It's the Golden Rule because it's the ultimate tenet of human relationships. Reciprocal respectful treatment leads to solid relationships.
15. Practice the Platinum Rule

Now let's learn about the Platinum Rule, a logical companion to the Golden Rule. Dr. Tony Alessandra gets the credit for this one. Where the Golden Rule focuses on you, the Platinum Rule focuses on others. The rule states: "Do unto others as they would do unto themselves." It means it's important to learn how others want to be treated, and then treat them as such.

Assignment: To find out more about the Platinum Rule, visit www.platinumrule.com.

To practice this rule, you have to observe and listen to others, discover their wants and needs, and then try to meet those needs. In doing so, you create a win-win situation, a mutually beneficial relationship—one that serves both parties with mutual gains.

Epilogue: If you follow the Golden Rule only, you could stumble in some relationships. You must also practice the Platinum Rule.
16. Always Look Toward Solutions

When it comes to relationships, you have to look toward solutions that move them forward—and not the problems that keep them stagnant.

Assignment: Consider your list from Idea 6 again. Which relationships have you stuck? Keep reading to learn how to move them forward.

Too often we focus on the problems of a relationship, such as people's past transgressions that hurt us, or workplace pettiness. Always focusing on the problems we have with people can make us stuck, sometimes so much that we can appear bitter. You have to get past this if you want your relationships to grow and become what you envision them to be. The goal is to look for solutions that move your relationships in the right direction, in a way that deepens respect, trust, and camaraderie. Of course, life will give you lots of practice because ever-changing situations and circumstances throw us curve balls all the time.

Epilogue: When it comes to cultivating your relationships, you have to focus on solutions that keep them moving forward.
17. Have Reasonable Expectations of Yourself

Know thyself. A key to being socially intelligent is being self-aware.

Assignment: Referring to your list (Idea 6), jot down what expectations you have for moving each relationship forward. Are your expectations of yourself realistic?

Do you have a reasonable understanding of yourself? Do you know what sets you off? Do you know what calms you down? Do you know your interpersonal strengths and weaknesses? As you attempt to take relationships to new places, make sure you have reasonable expectations of yourself. You do want to stretch yourself in moving relationships forward, but reaching too far out of your comfort zone can damage your self-confidence. And your self-confidence is an important element of relationship building. For example, if you struggle with feelings of inferiority, it might not be a good idea to try charming the office bully at this point in your life.

Epilogue: As you attempt to take relationships to new places, make sure you have reasonable expectations of yourself.
18. Have Reasonable Expectations of Others

It's also important to have reasonable expectations of others around you.

Assignment: Take inventory of your key office relationships and develop a set of reasonable expectations for each of those people.

For example, can you expect the person with whom you've just started working to take direct criticism from you? Probably not. You might need to develop a stronger rapport so this person knows—at an emotional level—that your criticism is constructive and well-meant. Just as you should have realistic expectations of yourself, you also need to have reasonable expectations of others. Having unrealistic expectations of others can lead to failures, disappointments, and hurt feelings. And those always result in damaged relationships.

Epilogue: Having unrealistic expectations of others can lead to damaged or failed relationships.
19. Be Principle-Centered

Here we draw from author Stephen Covey's Principle-Centered Leadership, which says that our principles should be the bedrock of our actions and the central driver of how we interact in relationships. Have you thought about what principles influence you on a daily basis? Perhaps you should.

Assignment: Take some time to reflect on your principles. To what principles do you subscribe, and how do you act in relation to them? Do your actions show that you are a principle-centered individual?

For example, do you treat others fairly, with integrity and honesty? These are basic principles that people expect us to follow. Yet principles are more than just actions, because in many ways your actions reflect your principles. They're your moral GPS system, so to speak: they guide your decisions and your resulting actions. And the perceptions those actions give will send out a signal to people indicating whether your principles align with theirs. And they act as an interpersonal beacon for others—telling them who you truly are. And they're absolutely essential to building positive relationships.

Epilogue: Principle-centered behaviors clearly communicate to others who and what we are.
20. Allow Others to Hold to Their Principles

While you have your own set of principles, recognize that everyone else has a set of principles that guide their decisions and actions.

Assignment: Think about your principles. How can others think differently from the way you do? Why is that all bad? Can you live with those positions?

Recognize that their principles are, to them, every bit as valid as yours are to you—even though they might be different from yours. You may be pro-life. Others might be pro-choice. If you disagree, that's okay. But respectfully acknowledge that others think differently than you do, and that their thoughts and their principles are also based on deeply held values and personal convictions. You are both operating on a set of principles that are immediately and personally valuable to each of you. Don't disparage others for their principles. Recognize, too, that people with different sets of principles and values can still work, live, and play alongside each other quite effectively—if we simply acknowledge and respect others' principles. Don't let differing principles stand in the way of potential relationships unless you simply cannot live with the opposite view of something. Believe me: This situation is much more rare than you think.
Epilogue: Remember: Others' principles may be different from yours, but they are also based on deeply held values and personal convictions.

21. Set Boundaries

Setting boundaries is healthy. In fact, we all need to set our parameters of what we will accept and what we will not.

Assignment: Take some time to think about principles, behaviors, and issues around which you want to place boundaries. You might even want to make a list.

Some behaviors are just not acceptable. You shouldn't have to allow all manner of behavior just because you want to create or maintain a relationship. And when you determine what your boundaries are, stick with them! For example, is foul language unacceptable to you? Are there certain topics you feel are off-limits in some relationships? Are politics and religion two things you won't discuss at work? These are only a few of many areas where you might feel the need to set boundaries. And if so, then do so. Your principles will usually dictate what your boundaries are. Remember: Boundaries provide predictability for others about what is appropriate in their relationships with you, and they help set expectations for all parties.
Epilogue: You should have boundaries to make your personal interactions with others appropriate to you. You’ll find others have them, too, and will appreciate yours.

22. Defend Your Boundaries

Once you have set your boundaries, vigorously and consistently defend them. If you have a boundary established about the use of four-letter words in conversation at work, stick with it. If you let others use that kind of language around you, you let them violate your boundaries on that behavior, and you make them wonder when it is and is not appropriate. You remove the predictability from the situation. Once you start allowing exceptions, you remove the predictability from acceptable behaviors, and people will be left wondering what is appropriate.

You don’t have to attack anyone who might be behaving inappropriately, making everyone uncomfortable. Just ask them to stop, politely. Explain why. Be personal and respectful. But do defend your boundary.

Assignment: Think through some ways to defend boundaries without attacking others.
Epilogue: Boundaries lead to predictability, and predictability leads to solid foundations for relationships.

23. Be Genuine

Just be yourself. You’ve heard that before, haven’t you? Well, it’s true. When you are not genuine, when you are acting a part or role that is not really you, you are trying to be someone or something you are not. People know when you are acting; we have all run into those phonies. You have even met people like that and recognized them for what they are: the guy who says he’s really supportive of you or your program, but says otherwise to people behind your back; or the gal who acts as though she knows everyone who is important, and then you find out she’s just “blowing smoke.”

When you are not genuine, people will recognize that fact. Perhaps not initially or even quickly, but eventually they will know. When that happens, you lose credibility and respect, and relationships are damaged. And it takes a long time to recover from that. It just pays to be yourself. Say what you believe. Present only what you know, not what you’d like to know. Represent yourself and your feelings honestly.

Assignment: Take a look at the relationships you have. Think about others who have not been genuine, and about how your relationships have fared when people have—or have not—been genuine.
Epilogue: Be genuine. People will respect you for it.

24. Don’t Take Yourself Too Seriously

Life, work, and relationships are important, and require serious attention. But don’t be so serious about everything that you can’t laugh, smile, or be happy. You know people like this. Take Frank, for example. He’s a likable guy, but he’s so intense that most people can only take him in small doses. He takes everything so seriously that he never cracks a smile, never tells a joke, never lightens up enough to talk about the ballgame over the weekend or the things his kids are doing in school. He’s always on task, focused on the mission, business all the time. He’s just no fun to be around! Don’t be that way. If you take yourself too seriously, others will begin to look at you as someone with whom they have to do business, but not someone they want to be around regularly. Be one of those people you want to be around.

Assignment: Inventory your attitudes at work. Are you too serious? Do you need to lighten up a little? Or are you one of those people others like to be around?
Epilogue: People like others with whom they enjoy interacting regularly.

25. Have a Sense of Humor

People with a sense of humor are much more fun to be around than those without. So feel free to crack a joke, or recount something you may have heard on TV or from the morning drive-time radio. It humanizes you. If telling jokes is not your style, think about funny stories—and share them! But don’t force it, and don’t overdo it. If you force a sense of humor you don’t really have, people will recognize that. If you overdo it, you risk the impression that you are not serious enough. Find a happy medium. People with a sense of humor are much more interesting and fun to be around in all walks of life.

Assignment: Don’t you wish you could spend some time hanging out with Robin Williams or George Carlin (RIP)?
Epilogue: Laughter is a universal magnet that binds people together and helps to create and maintain lasting relationships.

26. Laugh at Yourself

Self-deprecation is a sort of humor in which people make jokes about themselves, their shortcomings, or their mistakes. It’s laughing at yourself without concern for damage to your own self-esteem. For example, we warm up to people who can say about themselves, “Yes, I’ve been very successful, thanks to sheer dumb luck!” These kinds of statements also demonstrate an important trait for likability: humility. Humble people are more likable than arrogant people. Really! We’ve all heard about “that arrogant SOB,” but what about “that humble SOB”?

Laughing at yourself makes you human, and that makes you approachable—a good trait to show. So find opportunities to laugh at yourself with others. Let them laugh with you. Just remember that they are laughing with you, not at you.

Assignment: Think about ways you can laugh at yourself—and invite others to do so as well.
Epilogue: People would much rather be around someone with the humility to laugh at herself than someone who is arrogant.

27. Cherish Your Goofs

We all make mistakes. Cherish mistakes as opportunities. First, view them as an opportunity to apologize. If you make a mistake with someone, the best remedy is a quick and honest apology. An honest apology after a mistake is almost always well received by the other party, and serves to demonstrate your willingness to not only recognize when you are wrong, but also to take responsibility for the mistake.

Second, a mistake is a learning opportunity. Learn what you said or did that offended someone. Or learn what behavior bothered him. Make it part of what you know about that other person. Then, having experienced it once, don’t do it again. These revealing moments help solidify your relationships because they often expose you to parts of personalities not always seen by others—both of your personality and the other person’s.

Assignment: Identify social goofs you have made in the past. Analyze what you did, how it affected the other person, and how you rectified it (or failed to). Then examine the nature of the relationship after the goof. Learn from those experiences.
Epilogue: We all make goofs. The important point is to take responsibility for them and learn from them.

28. Social Skills Are Always a Work in Progress

Your ability to interact with people, individually and in groups, is an ongoing relationship skill set. We continually learn our professions and trades throughout the years we work in them, just as we continually learn more about working and interacting with people the more we do it—which is all our lives. Almost no one ever really becomes an expert at this. People change, different generations have significant differences in interactions and backgrounds, and we learn from our mistakes and our successes—and the mistakes and successes of others. Never assume that you’ve got it down and are now an expert. Once you do that, you’ll get a nasty surprise as someone comes up with a new behavior or concern you’ve never seen before. So always be a student of people and relationship-building.

Assignment: Study people and their behaviors. Keep learning and improving your relationship skill set. Stay on top of communication and language trends, and even fads. Always look for ways to improve your relationship skill set.
Epilogue: Learning about people and relationships should never stop. Older generations need to continually learn about the relationship rules with younger generations (got teenagers?). And younger generations need to be aware that older generations are different, and that their relationship expectations may be different as a result (got grandparents?). Relationships are dynamic activities, and, recognizing that people change in time, you need to adapt to those changes.

29. Your Character—and Your Reputation—Is Your Calling Card

Your business card is a useful item: It identifies you at that first meeting. But your actions at a first meeting, and all subsequent communication and behavioral interactions you have with others, establish your reputation as to what kind of person you are. After that, your business card probably does you no good at all. That’s reputation at work. And word gets around: for good or for ill, others will know about you even if they have never met you.

Assignment: Consider the times you have met someone and he already knew something about you.
“Say, aren’t you that guy who works with John in the accounting office?” Ever hear something similar to that when meeting someone new? That’s a signal. She already knows something about you from word of mouth. And that word of mouth has already established your initial reputation with that person. It might be right, and it might not. It all depends on what other people have to say about you—that’s what your reputation is based upon—and it sets up an expectation for the relationship that is about to be developed.

So always remember that every relationship you have, for good or for ill, will help establish your reputation with others you may not even have met. Every relationship is important. All of them. Work at them. See them as valuable to your future.

Epilogue: Reputation is your presence in the relationship marketplace. It can make you if you work at solid, positive relationships, or break you if you ignore those relationships.

30. Be Authentic

If you want people to trust you, you have to keep it real. That means you have to own up to who you are—and you have to like it. No one is perfect. We all have flaws. But so many people spend time trying to make us believe they’re flawless. You’ve met them: the know-it-all, the embellisher, the goodie-two-shoes. The list goes on. It takes a lot of energy to hide our faults and pretend to be something more than we are. And that’s a lot of wasted effort. Think about it for a moment: If you’re presenting yourself in some
form of false light, then you’re developing a false reality. In time, you’ll be found out. When we own who we are—values, principles, strengths, and weaknesses—with modesty and honesty, we can actually relax and begin to feel safe in the fact that people who like us, genuinely like us just the way we are. And that sets you up to attract friends and confidantes whom you can trust. Standing firmly in the center of who you are is a true demonstration of strength. In short, it’s “putting it out there.” In other words, it’s reaping what you sow. If you sow authenticity, you’ll reap it as well.

Assignment: Think about times when you’ve felt as though you needed to appear as more than you are. How did you feel about yourself? Did it take a lot of energy to keep up the pretense? Then think about the times when you were able to be completely honest about who you are. Did you feel stronger, more courageous, and honest? Give these two situations some thought, and then look for ways to live up to who you really are.

Epilogue: Keep it real. Be authentic. Real relationships are based on the reality of our true character.

31. Act With Integrity

Acting with integrity is a simple concept to understand. We wrote earlier about operating within a set of principles (see Idea 19).
These principles are based on your personal values, and acting with integrity means acting according to those principles. Having integrity means behaving in predictable ways based on your principles. Your behaviors should, then, conform to those principles. The key here is consistency. If your principles are clear, and you act according to those principles consistently, then you have integrity. As others learn your principles by observing your actions, they will begin to expect certain behaviors of you. That provides predictability, and it also indicates to people that you have a clear set of principles on which you base all your actions.

Assignment: Take stock of your principles again. Commit to consistently acting according to those principles. Reflect on your recent actions to ensure that you are doing so.

Epilogue: People who act with integrity are respected, sought after, and successful.

32. Build Trust

Trust is essential to any good relationship. Trust is defined by the Merriam-Webster dictionary as “assured reliance on the character, ability, strength, or truth of someone or something,” and as “one in which confidence is placed.”
You want to have others trust you. That means they need to rely upon you and what you do and say. How do you build trust? By being trustworthy. Similar to integrity, trustworthiness means you do what you say and say what you do. The more you demonstrate that you will act as you say you will act, that you will do what you say you will do, the more people will place trust in you. Trust is built on experience with you. It doesn’t happen overnight, but rather in time and trial as you consistently demonstrate that you can be trusted. If you are trusted, people will have the confidence to work with you and know you will be part of the team. And trust, as we have already said, is a strong basis for good relationships.

Assignment: Find opportunities to demonstrate your trustworthiness.

Epilogue: Trust is built upon trustworthy actions. Others will place confidence in people who are trustworthy. If you are trustworthy, trust becomes an integral part of your reputation.

33. Keep Your Word

Keeping your word is the bedrock of relationships. This is the physical manifestation of trust. Say what you will do, and do what you say you will do. If you make a promise to do something, do it. Don’t pass it off on someone else. You made the promise; you keep the promise!
Sometimes this is painful, or difficult, or challenging, or time consuming. But it is critically important that you keep your word to others. Nothing damages your relationships with others more than not keeping your word. If you can’t do it, don’t promise it. If you don’t keep your word, you can’t be trusted. And if you can’t be trusted, people will be reluctant to maintain a relationship with you.

Assignment: Concentrate on keeping your word. Make sure you only promise what you can deliver, and deliver on what you promise. You will be judged by your reliability.

Epilogue: You will be judged by your willingness to do what you say you will do.

34. Be Straight Up

Yes, that sounds like a cliché, doesn’t it? But it’s not. Being straight up means being honest and forthright—but always with compassion (see Idea 91). That means telling people—those who ask—what you really think, but in a constructive way. Not rudely, crudely, or meanly.

Assignment: Practice being straight up. Read Idea 91.
People actually appreciate honest, straightforward views. In fact, people respect those who are constructively “straight up.” We often feel we can trust them, because they will tell us what they really think, and not just what they think we want to hear. As long as what you say is respectful and shared with sensitivity, a straightforward conversation can strengthen your relationship.

Epilogue: Being straight up is helpful to others when done tactfully.

35. View Discernment as a Gift

It’s true that not everyone is discerning. Oh, you want to know what it is? Okay. Discernment is the ability to see what is not evident or clearly obvious. It’s a difficult trait that is a real challenge to develop. If you have it, cherish it and use it.

Assignment: Practice this by doing it. Engage in actively learning about the people around you with whom you want a relationship.

Discernment can apply easily to people: If you are discerning, that means you are a good judge of character in others, or a good judge of someone’s motivations for doing or not doing something. You can also think of discernment as accurately perceiving others’ actions and motivations. People do things and sometimes you
don’t know why. If you are truly discerning, you can sense their motivations. You can also think of discernment as insight: Having insight into how people might react or behave in certain situations is extremely valuable. Work at developing this facility. Get to know people and why they do—or don’t do—things. Learn what drives them. Learn what motivates them. Again, the ability to do this comes with experience and a real effort to get to know the other person.

Epilogue: You become discerning through experience and knowledge about other people. Indeed, the ability to do this comes with experience and real effort to get to know and understand other people as individuals.

36. Always Show Respect

Aretha Franklin sang it clearly—R-E-S-P-E-C-T! We all want it, we all expect it, and we all deserve it. No matter who we are or what role we play in any organization, we want respect from others. Indeed, we have a right to expect it. But that means we also need to show others respect. Treat others the way you want to be treated: with respect.

Assignment: Do it. Think about it as you deal and work with others. Are you behaving as you would want them to behave?
This doesn’t mean we should ignore problems, overlook shoddy efforts, or disregard inappropriate behavior. But it does mean that we should be treating others with the same respect we would ask them to use in their dealings with us. Does this sound familiar? Yep: the Golden Rule (revisit Idea 14).

Epilogue: Remember that it always starts with you. You set the tone for your relationships with others by your behavior toward them.

37. Practice Tolerance

When we speak of tolerance, we are talking about tolerance for other people’s points of view, values, opinions, and so on. Remember that people are all different and they have different attitudes. They also have different strengths and weaknesses. Sometimes some or all of these things can seem to be a problem. Sometimes they seem to be barriers to relationships and to getting things done. We all need to recognize that we are different, and to cherish those differences. After all, if we were all the same, it would be a boring world!

Assignment: Practice this the next time someone else comes up short on an assignment or expresses the “that’s the dumbest idea I’ve ever heard” type of thought. Remember others’ strengths and weaknesses.
Recognize that everyone has a right to their opinions and values, even if they disagree with yours. Agree to disagree, and move forward with your tasks. The key tolerance you need to develop is for those people who don’t have the skills or knowledge you do. Someone makes a mistake because she is not as good at something as you are. Don’t fly off the handle. If you do, that will surely damage any relationship you may have with her. Instead, understand her shortcomings and offer to help her overcome those shortcomings in knowledge or skills. Work with her. After all, you want her to tolerate your shortcomings, don’t you?

Epilogue: We all need a little tolerance for differences and weaknesses in order to create successful working relationships.

38. Choose Words Carefully

Words can kill! Choose the wrong words and they can kill a relationship.

Assignment: Make an inventory of emotionally charged words and phrases, and consciously begin to eliminate that language—those words—from your vocabulary.

In fact, words have two meanings: denotation and connotation. Denotation is easy: the definition of the word. We learn those in school. Connotations, however, are the real-world emotional responses some words can bring about. Connotation is the concepts
and ideas that come to mind when the word is spoken, which are sometimes rooted in its definition, and sometimes not. Some words, however, come with an intense emotional charge that goes beyond their definitions. And some words are deadly in and of themselves. Words such as stupid, ignorant, dumb, and general profanity are all pretty obvious. You would never consider using the “n” word for any reason to anyone. Its definition and connotation are so negative that it evokes strong reactions from just about everyone.

And some have made their way into our everyday office speech. For example, many people bristle at the phrase “dumb it down,” which clearly communicates that you think you’re higher on the intelligence food chain than those around you. It’s not a good idea to use the word shyster when talking to your company’s legal counsel. It’s also a bad idea to use the phrase spin doctor when talking to your public-relations professionals. Spin doctor implies lying and deceit, and it’s a dirty word in the PR profession.

Often we find ourselves using these emotionally charged words in moments of stress, anger, or frustration. When in the company of people with known special interests or concerns, avoid words to which you know they will react negatively. And never try to set someone off emotionally using words. That’s manipulative (see Idea 45). Be conscious of words so that your interactions can remain on a neutral and common ground.

Epilogue: Using the wrong words can significantly damage relationships.
39. Words: I vs. We

In our daily lives, and in our daily interactions with other people, we generally find ourselves using a lot of pronouns: I, me, we, he, she, they, them, and so on. They serve as a sort of shorthand in spoken and written language that simplifies our communication. Very useful. And generally there’s no problem using them. However, one of them can be damaging to relationships when overused. That’s the pronoun I.

Assignment: Examine your use of the word I. Do you overuse it? Are you in danger of communicating an I focus instead of a we focus? If so, start making changes in your communication to include fewer Is and more wes.

Here’s the problem with I: It implies a focus on yourself, not what others may want. If used occasionally and sparingly, it’s no problem. However, if used constantly, it leaves people with the impression that the speaker (or writer) is focused only on what the speaker wants. We see this constantly when teaching students to write cover letters for their resumes when job hunting. They want to communicate that they have the skills and abilities to do the job for which they are applying, but in doing so, they use far too many Is, and communicate that they are focused on themselves, not the job or the company. We see it in small group discussions, too. One or more people will constantly express, “Well, I think...,” or “I expect that...,” or “I would like to see....” This all too often results in group members
feeling as though the person speaking is only focused on his perspective, and not interested in others. A much better word is we. Especially in group discussions, we connotes a group focus, not an individual focus as does I. We says, “I’m in this with everyone else and I want to support the group outcome.” At the same time, it allows everyone to express their opinions, ideas, points of view, and information in a neutral, group-focused approach.

Epilogue: I equals me; we equals us.

40. Use Kind Words

Being positive is always a bridge to good relationships. Part of being positive is using words that communicate kindness. When making observations about someone’s performance, use positive words. Be honest about the performance, but be kind about how you express the actions and the remedies. Expressions of congratulations when due are always bridge-builders, as are expressions of condolence when sad personal events take place.

Assignment: Think about the times people have been kind to you in conversations. Start practicing the same behaviors.
When observing opinions and reacting to others’ statements, positive and kind words are always well received; negative and critical words rarely are. What are kind words? “Congratulations. Well done! Great! Super job. Good work. I’m sorry; let’s examine the situation and work out how we can do better in the future.”

Epilogue: Kind words build bridges in relationships.

41. Don’t Kill Relationships With Your Behavior

Some behaviors can destroy relationships with other people. Dr. Howard Lambert calls these the Four Horsemen of the Apocalypse. They are behaviors that, if practiced regularly, seriously damage or destroy relationships—or block any chance at having one. His horsemen are: criticism versus complaint, contempt, defensiveness, and stonewalling. Let’s examine each briefly.

Assignment: Have you practiced any of the Four Horsemen? If so, examine the circumstances and resolve not to do so in the future.

Criticism versus Complaint. Lambert notes that a complaint is okay, because it addresses a single issue or behavior. People recognize a complaint as just that, an individual issue. But all too often what is expressed is global criticism that is meant to apply
across the board. A complaint is “you missed the deadline on that proposal.” Okay, fair enough. A criticism is “you always miss your deadlines.” That’s global and far-reaching—and may not be true. It is almost never helpful in moving a relationship forward positively.

Contempt. We often express contempt with condescension and negative facial expressions, such as a sneer, or with the use of sarcasm. Often this happens in arguments and strong disagreements. And when it does, it makes the other person feel attacked, and it is incredibly damaging to that person.

Defensiveness. Defensiveness is an avoidance of responsibility. When you get defensive about something, you are trying to pass responsibility onto others. It says, “the problem is not with me, it’s with someone else.” This effectively stymies any attempt at resolution.

Stonewalling. This is tuning someone else out of the conversation. Someone stonewalls the issue by tuning out. Stonewalling stops the communication. No communication—no relationship.

Epilogue: Beware the Four Horsemen!

42. Do Not Gossip

Let’s be clear: Gossip is creating rumors and innuendo about other people’s professional or personal lives that may or may not be true. Gossiping about other people, whether the stories are true or untrue, leads to misinformation—intentionally or not.
When that happens.Quick Ideas 41 to 43 If you engage in repeating Assignment rumors and stories you hear Think back to times from others. Remember It’s saying. then you know that being dismissed as accounting for nothing is a terrible feeling. and happy about that? don’t stick around when others do it. Stay away. Don’t do it. Were you from gossip. Stay away your attention. and if you were involved. Remember: People almost always find out who spread gossip. Bad stuff. that her thoughts and ideas don’t matter. Epilogue Gossip. your relationship with that person will be seriously damaged. Don’t Be Dismissive Being dismissive is making Assignment someone feel as though she is not Recall times when worthy of your time or attention. you do so at the when others have gossiped risk of spreading disinformation about you and it came to and misinformation. If you have ever been treated this way. perhaps in words and how that made you feel? perhaps by behavior. someone has treated you It’s belittling her by ignoring her. It’s 43 61 . dismissively.
demeaning, and makes you feel as though you are worth less than others. You don’t want to be treated this way, and neither do others. If you treat others dismissively, then prepare to have weak or even hostile relationships with them.

Epilogue: Treating people dismissively is more likely to create enemies than friends.

44. Don’t Be Condescending

Assignment: Think about some of the times people have been condescending to you. Did you appreciate it? What did you think about the person who did it to you?

Condescending behaviors are those that result from a patronizing, superior attitude. Patronizing behaviors are those that seem polite and positive on the surface, but are clearly intended to express someone’s superiority over you. And then, often, you’re depressed, and angry at the person who treated you that way. Is this any way to establish a relationship? Of course not. He expresses ideas or responses that clearly suggest that he is better than you are, or more knowledgeable, or—well, want some examples? Here you go:
“Thanks for your thoughts, but I don’t know what it has to do with what we are talking about.”

“That’s an excellent idea. We are all refreshed by your unique point of view.”

“Let me see if I can put that in terms others can understand.”

These behaviors can be damaging, and really have no place in your interactions with others. Instead, choose behaviors that build collaboration and trust (revisit Ideas 36 and 40).

Epilogue: Condescension leads to destructive relationships.

45. Don’t Be Manipulative

Trying to manipulate people is an effort to get them to do something they really don’t want to do. That can be by threatening, intimidating, lying, or some other nasty behavior. Bribery is one way. And it’s not always money; it can be favoritism, implied rewards, or the threat (as in coercion) of negative consequences. When you manipulate someone, you attempt to get her to fall to your agenda by unethical means. Use of these techniques results in one party having power over the other, and that’s not a mutually beneficial relationship. This is known as a win-lose. Using manipulative behavior, you might get your way once or twice, but it will always come back to haunt you.
Assignment: To learn more about what manipulative behavior can do for your career, read Idea 135.

Epilogue: Manipulating others is unethical and leads to negative relationships.

46. Don’t Make Assumptions

Do you know what assume spells? Assumptions make an ass of u and me. Assumptions are dangerous, yet we make them all the time. We assume that other people know something. We assume that others are in agreement with us without ever asking them. We assume that someone has the skills to do something. We assume that people want the same things we do. We base these assumptions not on hard evidence, but upon guesses, things others say, and partial information.

Assignment: We’re sure you’ve done this in the past. So analyze that circumstance and think about how it might have turned out better if you had avoided the assumption and just asked.
Their pessimism creates a barrier that others either can’t penetrate or don’t want to bother to. as in a group meeting. Avoid making assumptions about anyone or anything. always to be an optimist. Assignment you have a negative expecExamine your behavior. Worse. ways expect that the worst will happen. who often gain a reputation for being overly critical and negative. So. If you don’t know for certain. 47 65 . It’s simple: The ing to be around. you risk damaging your relationship with that person. Don’t Be Pessimistic If you are a pessimist. or at least neutral. others will recognize you made an assumption. ask. You alglass is really half full. and the knowledge will damage your relationship with them. only to be surprised when he expressed the opposite view? If you do this in front of others. Are you always negative? If so. You’re gloomy and depressheal thyself. Epilogue Assume spells bad news. People tend to avoid pessimists.Quick Ideas 45 to 47 Have you ever spoken with someone and assumed he agreed with your position or idea. It’s hard to create a positive relationship with a pessimist. tation of every outcome. They are not fun to be around.
Epilogue

Pessimists are depressing to be around.

48. Don't Be a Cynic

Okay, you think pessimists are bad? Cynics are worse! A cynic distrusts everyone and everything. A cynic assumes the worst, and a cynic is very vocal about it. They are verbal pessimists. They not only assume the worst, but they also tell you about it in no uncertain terms: "That will never work." "You will never get him to agree to that." "You've been around her long enough to know we will never be successful." "This company will never achieve that goal." And often they are sarcastic and nasty about it: "You're out of your mind." "If you think that can happen, you come from a different planet." Wow! Talk about negative vibes!

Assignment

Identify cynical attitudes in others, and watch how others respond to them.

Epilogue

Cynics are verbal pessimists. Cynics are difficult people, and people avoid them.
49. Don't Be Over-Reactive

Sometimes things do not go as planned. Sometimes bad things happen. Take Jim, for example. He would blow up, yelling and screaming at everyone, especially the person who brought him the bad news. He would go on for an hour. Then he was fine. But that hour—wow! So what happened? People started avoiding the prospect of bringing him bad news. The result? Things festered and got worse. It made for an ugly work situation until he calmed down. If you overreact to things, people are going to start avoiding you. This reduces your effectiveness. It reduces your interactions, making relationships difficult. It creates fear in subordinates and coworkers, and destroys relationships.

Assignment

Identify people in your life or past who overreact to things. Do you like it when they do it? Do you avoid them? Do you overreact? Well then, don't overreact to things, especially bad news. If you overreact, take a look at Idea 71.

Epilogue

People avoid those who overreact.
50. Don't Be Domineering

A domineering person is one who always wants to be in charge. Consider some of the synonyms for domineering: authoritarian, bossy, overbearing, bullying, dictatorial, tyrannical, autocratic, dominating, high-handed, high-and-mighty. Whoa! Do you like that kind of person? Do you like hanging out with someone who is domineering, bossy, and bullying? I doubt it. So don't do it yourself. Instead, be open to consultation, discussion, and group exploration. Allow others to express their thoughts and opinions. And listen.

Assignment

For thoughts on how to be less domineering, take a look at Idea 70.

Epilogue

Domineering personalities tend not to have positive relationships with others.
51. Don't Be Overly Opinionated

It's not that you should not have opinions; it's just that you should avoid forcing them on others at every opportunity. Another way to look at this is to find that person—you know one—who lets you know his opinion on everything all the time. He usually doesn't even wait to be asked his opinion, and he expresses it vociferously and with emotion. Often, it's done with the attitude and assumption that other opinions are irrelevant. As a consequence, overly opinionated people tend to be avoided. If they are avoided, they also don't get much opportunity to create solid relationships.

Assignment

Examine your own behaviors. Are you overly opinionated? Do you force your opinions on others? Perhaps you need to rein in those opinions a little. Allow others to have their own opinions.

Epilogue

Those who are overly opinionated are also often avoided.
52. Don't Be Overly Aggressive

Being aggressive is akin to being dominating. It's taking charge too often and too energetically without allowing others a role or input in the decision or situation. When you are being aggressive, you are only interested in satisfying your own objectives and concerns. In fact, aggressive people tend to ignore the interests of others, and are not interested in others. As you can guess, being aggressive is not conducive to building good relationships with other people. Being assertive is much more acceptable, and powerful in building relationships. Assertive behaviors simply ensure that one's opinions or ideas get expressed. But assertive people don't try to dominate the process, and they certainly don't ignore others. Assertive people are, similar to those in a good relationship, looking for a mutually beneficial outcome.

Assignment

Are you too aggressive with others? If so, take a look at Ideas 77 through 79 for some ideas to adjust your behaviors.

Epilogue

Be assertive, not aggressive.
53. Help Others Grow

All right, this one's simple. If you help someone at work by showing her new skills, processes, or providing new information and knowledge, you are helping her become better at her job. If you help someone at work by giving him the opportunity to do things he has never done before, or new responsibilities he can grow with, you have participated in his future success. If you help someone grow in life, becoming more capable, more knowledgeable, more effective, you are part of a very positive relationship. People like others who help them. They appreciate others who are aiding them in their profession, on their job, or in their personal lives.

Assignment

Look for ways to help others become better.

Epilogue

Giving of yourself to help others is a powerful relationship-builder.

54. Believe in Others

When people know that you believe in them, that you are in their camp, when you are truly supportive of them and want them
to succeed, they always do better. (Think back to the Pygmalion Effect in Idea 11.) This is a simple management principle that should also be applied to personal and professional relationships. How do you express belief? In simple comments and behaviors. Start by being encouraging. Tell them you care and want them to succeed, and express that belief. Be there for them if they have questions or need assistance. When they need help, provide it. When they have successes, congratulate them. When they have failures, help them improve and overcome. Conversely, when people don't think you believe in them, they perform badly. The more you believe in others, the better they will do and the stronger your relationship will be with them. The more you demonstrate that you truly believe in someone else, the stronger the positive relationship you are building with that person.

Assignment

Read Ideas 92 through 99. Say yes.

Epilogue

Believing in someone else strengthens both parties and your relationship.

55. Wage Peace in Your Relationships

Make your relationships peaceful ones. There is a great deal to say for working with others who are calm, rational, friendly, and
easy to be around. On the other hand, when relationships are stormy and full of argument and emotion, the relationship is much more difficult to maintain. Peaceful relationships are often much more positive and easier to maintain. If you are in a peaceful relationship, it's comfortable and rewarding. You are much more likely to seek out people like this than those with whom you have stormy relationships.

Assignment

Ideas 77 through 151 can help you wage peace in a variety of circumstances.
Epilogue
Which would you rather have: peace or war? In relationships, it’s clearly peace.
56. Be a Peacemaker Between Friends
Throughout your career, you will occasionally find yourself in a position of having two friends who disagree heatedly about an issue. When that happens, you may have to make peace to get things back on track. If you have to, be the go-between, because you have a good relationship with each person. Or, you might need to call a private mediation between the three of you. But be cautious,
about appearing to take one side or the other. Your goal is to facilitate, helping them hear and understand each other. Just remember to remain neutral.
Assignment
Think back to a time when a friend helped you and another to hear and understand each other.
Epilogue
When necessary, help friends make peace with one another.
57. Respect Different Personality Types
Assignment

Recognize the differing personality types around you, and find ways to understand and work with them.

Each person has his or her own personality type, which affects how he or she works, how he or she lives, and how he or she interacts with others—these personality types can be very different from each other. That's one of the things that makes people unique and interesting. It's important to recognize that not everyone operates the way you do. Type-A personalities may show up at work two hours early and stay two hours late, and then take work home with them. In
contrast, a Type-C personality may do what she is assigned, arrive and leave on time, and likely not take work home. These different types are all valid, and influenced by a combination of genetics and environment. So don’t judge people based on their personality types. Instead, respect the traits of their type. They will not change for you, believe me!
Epilogue
People are different. Live with it.
58. Understand Different Styles
For years, social scientists have studied and classified different personal styles to help the military and corporations encourage better working relationships. Personal styles are not to be confused with personality types. A Type-A personality could have a combination of two—and occasionally more—personal styles that make up his own individual approach to people. So what are these personal styles? Well, there are several "systems" consultants and scientists have developed, but they almost all fall into the following four categories: the Analyzer, the Supporter, the Director, and the Creator.

Assignment

Take a look at the four personal styles. Which best describes you?
In brief, the Analyzer is the thinker. She likes to investigate situations in detail, is disciplined, prefers precision, and is a practical problem-solver (think of Spock on Star Trek). The Supporter is, as you can imagine, sensitive and caring, tending to be sociable and cooperative with coworkers. The Director, on the other hand, is results-oriented. And finally we have the Creator style: both imaginative and innovative, he is the free spirit in the group. Seen them around the office? We bet! Of course, these style labels are general in nature. Despite there being four definitive styles, most individuals combine two styles to develop their own unique approach to interacting with people. However, one style is usually dominant, and if you can identify it in someone, you can better apply the Platinum Rule.

Epilogue

Styles are ways of behaving, ways of looking at and interacting with the world. When you understand someone's dominant personal style, you can apply the Platinum Rule effectively—and figure out how best to interact with someone who leans toward a particular style.

59. Recognize That Styles Differ From Opinions

Styles are not opinions. People in each style category will still have opinions, but will base those opinions on varying information and approaches, and will likely come to different conclusions.
Regardless of personal style, a person's opinions are usually based on her principles and values, and can differ greatly from others in the same personal style category. For example, Analyzers will likely form their opinions primarily through facts. Supporters, by contrast, might weigh emotions and relationships. The point is that, even if two people have the same personal style and may form opinions by a similar process, the opinions they arrive at could be worlds apart. But they could share similar opinions on a particular issue or topic. Someone's personal style does not reflect his opinion, just his method for arriving at it. Knowing personal styles is not a shortcut to listening to and understanding others.

Assignment

Remember not to stereotype people's opinions based on their styles.

Epilogue

Don't assume someone who has a similar personal style as you shares your opinions on issues and situations.

60. Know Your Own Style

So what is your style? It's nice to know others' personal styles, but it's also important to know your own—and how it is similar to or different from others with whom you interact regularly.
Are you predominately the Analyzer? Maybe the Supporter, or the Director? Or maybe you get to have all the fun as the Creator? Does knowing this about yourself help you see why you might mesh or clash with some coworkers? Does it help you see that it's not that you're right and your coworker is wrong—or vice versa? Can you also see why the Platinum Rule is just as important as the Golden Rule when developing relationships? Think about it.

Assignment

To learn more about integrating your style with other personal styles, see Idea 61.

Epilogue

Know thyself, and you will begin to know others.

61. Stretch Beyond Your Style

To truly apply the Platinum Rule, you will have to stretch beyond your own style and try on another when interacting with others. If you're a Creator, and a Director is trying to relate to you, maybe she is following the Golden Rule (she's treating you in a way that meshes with her style). But you're a Creator, right? So, treating you as she wants to be treated probably doesn't work for you. Instead she would need to apply the Platinum Rule. And you would need to do the same for her if you're truly going to have a solid relationship.
For example, if you are an Analyzer and you're working with a Supporter, then you will need to think beyond the logical side of something and investigate the emotions that drive the situation. Numbers and statistics may not get you there. You may need to step out of your own personal comfort style and get into the head of your Supporter colleague—who works and comes to decisions based on how people feel. Then you can better determine what he needs, and you can use that approach to build emotional equity with him. Again, correctly applying the Platinum Rule requires you to stretch into another style, and truly try to understand how your colleague arrives at conclusions as he does.

Assignment

Try elements from different styles to stretch yourself.

Epilogue

Learning through experience is the best way to gain insight into the elements of personal styles.

62. Embrace Different Styles

"Differing styles" does not have to equate to "conflicting styles." In fact, you should perceive them as complementary. Each person,
with her own unique version of each style, brings a different approach and perspective to every action, event, or interaction. There is great strength in this variety, if you recognize it. With these varying styles, you can look at an issue from four angles, instead of just one or two. For example, the Analyzer brings accuracy and an orderly, systematic approach to process. The Supporter is the people person, who brings emotion and harmony to the process. The Director brings a focus on the practical. And the Creator brings innovation to the equation. Talk about seeing all sides!

Assignment

Think about which styles apply to different people you know to help you understand them better. Value the different styles, and their talents.

Epilogue

All styles are valuable. Value people's differences and understand how strong work communities are built through this diversity. Embrace them.

63. Determine If You Are Shy

You enter a room or a situation with which you are unfamiliar. You don't know anyone. You've never been here before. Suddenly, your heart starts to beat rapidly and you break out in a sweat. You want to run. You want out of this situation.
If you have ever experienced something similar to this, you are probably shy. Shyness is the desire for withdrawal when we face unfamiliar situations, most often experienced in social settings. Many of us have experienced this, some more than others. Not you? Well, did you know that researchers estimate that as much as 50 percent of the population of the United States is shy? And that percentage is increasing. If you are shy, you need to combat that. It can be a real barrier to building relationships if you can't meet and get to know others. You need to be able to meet people and develop relationships with them. If your shyness is getting in the way, then you need to work on overcoming shyness.

Assignment

Examine yourself honestly. Are you shy? If so, read Idea 64 for help.

Epilogue

Shyness is a barrier to relationship-building.

64. Overcome Shyness

Some very famous and successful people were shy at one time. Johnny Carson, Henry Fonda, Barbara Walters, and Gloria Estefan were all considered shy at one time in their lives. Obviously, they overcame the issue, and so can you.
Here are some tips for overcoming shyness: Work on your listening and communications skills. Learn to ask questions; that starts conversations. Let others take the lead in conversations, then jump in after the discussion has begun. Learn to smile. It's an ice-breaker. Learn more about reading nonverbal communication, such as body language and facial expressions. Observe others in environments that elicit shyness in you. These are just a few ideas. Shyness is a major barrier to developing relationships. Get more help with the following assignment.

Assignment

For more help with overcoming shyness, visit www.overcomingshyness.com.

Epilogue

You can overcome being shy!

65. Overcome Feeling Inferior

Feeling inferior to others is a matter of having low self-esteem. When you feel inferior to others, you believe that everyone else is better than you, either professionally or personally.
If you feel inferior to others, you need to do something about it, because feeling inferior will be a barrier to creating meaningful relationships with others. First, recognize that this feeling is almost always unfounded. Yes, you may well have had some bad experiences in which your ideas and actions did not achieve the desired result—we all experience that. But it's very likely you're just forgetting your successes. Failures are almost always caused by a series of factors. Low self-esteem can leave us with a distorted view of reality, too focused on one situation or set of circumstances, and that can be difficult to see around. And low self-esteem is often caused by real-world experiences—experiences that have somehow suggested that you don't measure up. Yet you can turn this situation around. You do need to take action. Do you need more training? Do you need a mentor? Do you need more experience? If you do need to make improvements, then make them. Finally, remember that you do have strengths, and that no one is truly inferior to others. Find out what your strengths are. Identify them and build on them. Use them as often as you can, and you will be rewarded with successes. And those successes will, in time, help combat the feeling of inferiority.

Assignment

If you have this problem, take action. Just taking action can be empowering. You might need to start with Idea 105.

Epilogue

Eleanor Roosevelt once said, "No one can make you feel inferior without your consent." Inferiority is a barrier to creating positive relationships. Overcome that inferiority.
66. Overcome Feeling Intimidated

Intimidation is similar to feeling inferior. The difference is that usually someone else has done something that has made us feel fearful or emotionally overwhelmed. If you are feeling intimidated by someone, then you need to look at your relationship with that person. Is he the boss? Is she the office bully, domineering and reactive? If so, it's easy to see why you feel intimidated. Recognize that in these cases you are unlikely to be able to establish a positive relationship with this person. But if the intimidating feelings you're experiencing are founded on real threats or coercion, then you need to address the problem—by going to your human resources department, or even hiring legal counsel. Otherwise, overcoming intimidation is essentially the same as overcoming feelings of inferiority. And if your own emotional bank account is running on low or empty, you might need to replenish it (Ideas 103 through 113) before you tackle this matter. So do it.

Assignment

Overcome intimidation with the steps identified. You might want to start with Idea 105 first.

Epilogue

You can beat intimidation if you are the problem.
67. Don't Be Too Talkative

Do you talk a lot—to the point of it being almost a social handicap? There are many reasons for this, but usually it stems from fear and low self-esteem. Although being social is a good thing, being overly talkative is a detriment to your relationships. First, if you talk too much, people can't get their opinions and ideas into the conversation. People shy away from conversations with talkative people because they can't participate fully in the conversation. Secondly, talkative folks tend to eat up people's time. They know if they stop by your office to say good morning, it could take half an hour to get away again. And being too talkative might integrate with a dominant or opinionated approach. Being too talkative actually undermines your personal power and influence with others. If you know this is you, you need to get your Chatty Cathy interactions under control. If you talk too much, people will either avoid you or directly tell you so.

Assignment

Examine your situation and relationships to determine if you talk too much.

Epilogue

Being too talkative undermines your personal power and influence with others.
68. Listen, Don't Talk

The cure for being overly talkative is simple: listen. Yep, just learn to listen more, and do it well. Social scientists tell us that effective communication between people needs to be equitable for relationships to take root. That means that we need to spend at least as much time listening as we do talking—and some cultures believe you should listen twice as much as you talk. But if you find that your talkative nature is more than just an oversight or a habit gone awry, and it's based on feelings of insecurity or low confidence, then you need to fill your own emotional bank account first.

Assignment

Skip ahead to Idea 85. If you think your talkative nature is more than just a bad habit, read Ideas 103 through 113 first.

Epilogue

Listening is the true cure for a talkative handicap.

69. Get Out of Your Own Way

Think back to Idea 41. Remember those behaviors that can kill relationships? Well, those behaviors that become a pattern in
time can also become your personal relationship approach. Three that are sure to cause you problems are being dominant, reactive, and aggressive or intimidating. Consider these the three deadliest approaches. Each of these approaches can cause people to run the other way, or avoid you at every turn. They can also cause you to live in an alternate reality—they can lead you to believe things about yourself that just aren't true. If you project any of these consistently with people, you could be getting in your own way to success. To get out of your own way and get on the path to success with people, you have to fess up: Are you projecting one of the three deadliest approaches?

Assignment

Reflect seriously on your relationship approach. Do you need an approach makeover?

Epilogue

If you project any of the three deadliest approaches, you could be getting in your own way to success.

70. Douse the Domineering

Calling all domineering divas! If you're one of those people who just knows it all, has an opinion about everything, and has to always be in control, then you've come to the right chapter.
or you’re the guy who is always on edge. interpreting everything as an opportunity to blow. And. Don’t Be Reactive Reactive people make us all nervous. frankly. you know it. And these feelings of self-doubt run like a never-ending tape through a domineering person’s brain. Then read 77 adopt a more effective through 99 for more techniques to approach. This person rarely demonstrates listening. if you’re reactive. 71 88 . and instead put the focus on someone else. Epilogue To transform a domineering approach. And that also redirects your energy from talking to listening in order to learn what’s in someone else’s head. Either you’re the guy who is pretty evenkeel until someone pushes your hot buttons. far the most important. you have to practice listening—and lots of it. listening is by make over a domineering style. That’s right! You have to practicing listening—and lots of it.151 Quick Ideas to Improve Your People Skills Though there are a Assignment number of new habits you’ll need to develop to Read Idea 85. Domineering styles are really just displays of pent-up energy generated by low self-esteem and fear. and instead talks to deflect all possible and imagined threats. But by listening—truly listening—you break the focus on your own fears and self-doubts.
First know that your reactive nature is driven by fear, low self-confidence, and self-doubt. However, you can start to get it back on track with three emergency techniques found in Ideas 128, 131, and 141: (1) keeping your thoughts in check (Idea 128), (2) keeping your tongue in check (Idea 131), and (3) learning to ask questions to understand (Idea 141). These three must be used in combination. This last step helps you keep egg off your face—something reactive folks tend to wear a lot.

Assignment

Read Ideas 128, 131, and 141. Then read 77 through 151. You need them all to truly transform your style to one that is much more effective.

Epilogue

You can nip a reactive outburst in the bud by holding your thoughts and tongue in check—in that order—and by asking questions to learn more.

72. Tackle the Intimidator

If you must win at all costs at even the first sign of disagreement, and will resort to making personal attacks, slinging mud, and calling people out in front of others, then you've stopped at the right place. The first thing you need to know is that people are avoiding you. They fear you, so you're not getting honest answers, good information, or acceptance. And that makes you ineffective in the workplace and other parts of your life.
But we want to help. We believe you can change. Change starts with knowing that your approach is not okay. Similar to the two other deadly approaches, you're running on fear. You will have the most work to do, but could end up holding the silent title of Most Improved among your coworkers.

Assignment

Read this book cover to cover, but start with Ideas 94, 106, and 109. These are a good starting point in transforming the intimidator approach.

Epilogue

Projecting an intimidator approach makes a person ineffective in the workplace, and in other parts of life.

73. Strive for Live Interaction

Assignment

Give some face time today, and daily. Go high-touch!

Today there are so many ways to communicate with each other: mobile phones, e-mail, text messaging, instant messaging, voice mail, and so on. But the most effective communication is not high-tech, but high-touch. By that we mean direct, live interaction. We're talking face time here! To build good working relationships, you have to
get out from behind the computer, the texting keyboard, and the mobile phone, and get in front of people. Technology is all about logic, but people are emotional creatures (Idea 77). They connect with faces and voices. They don't connect with a message on a screen, and they need the other 80 percent of communication—body language and tone—that goes beyond words to make that connection (see Idea 133).

Epilogue

The most effective way to communicate with people is through high-touch, not high-tech.

74. Practice Face-to-Face Communication

From managing day-to-day relationships to resolving conflict and solving problems, face-to-face communication is the granddaddy of them all. For decades, communication experts have been studying the most effective forms of communication, and face-to-face wins out again and again. For one thing, we're usually all a little nicer to each other face to face, and we're all a little more relaxed. That nurtures better communication. You just get more done when you meet face to face with people, for day-to-day interactions and for those times when situations are tense (see Idea 114).

Assignment

Make it a point this week to have your most crucial conversations face to face.
Epilogue

There is no better form of communication than face-to-face conversation.

75. At Least Make It Live

All right, we understand that it just isn't possible to hold every conversation with someone face to face. So, when you can't, at least make it live with a phone conversation. That doesn't mean a voicemail message. The conversation has to be live. You have to submit yourself to a flow of give and take, in which tone and inflection complement the words you and the other person are sharing. And you can't get those from a sterile e-mail or text message. In fact, read on to Idea 76 to learn why we recommend using e-mail sparingly in working with others. Hey, don't discount it! Tone and inflection provide a wealth of information about the health of a conversation.

Assignment

For those conversations you just can't have face to face, make them live with an actual phone conversation.

Epilogue

In a live conversation, you get tone and inflection, which provide a wealth of information about the health of a relationship.
76. Beware of E-mail

E-mail. The younger generation swears by it—lives by it. And yet it is the most ineffective form of communication for building and maintaining relationships. Take Michelle, for example. When her staff and another division found themselves with strained relationships, Michelle noticed that the vast majority of communication between their two groups was done via e-mail. And e-mail left a lot to be interpreted from mere words on a screen. Most often the tone of those words was misinterpreted, and misunderstandings and hurt feelings ensued. So, Michelle established a policy with her staff: If it takes more than two e-mails to resolve a situation, or get clarity on it, then her staff must get up and walk down the hall to meet with the other division's employee, or have a live phone conversation. Months later now, the two divisions have a great relationship. They work more collaboratively, they laugh more together, and each division supports and encourages the other. The point? E-mail is no substitute for good ol' fashioned high-touch communication.

Assignment

Adopt Michelle's policy: If it takes more than two e-mails to resolve a situation, then go have a face-to-face conversation.
Epilogue

If e-mail is your mainstay of communication, you need to expand your horizons to include high-touch interaction.

77. Remember That People Are Creatures of Emotion

Assignment

Keep reading. There is so much more to learn about developing solid and effective relationships with people.

Though you may like to believe that people act from a place of rationality and reason, it would be a mistake to approach them with that expectation. As the great Dale Carnegie once said, "When dealing with people, remember you are not dealing with creatures of logic, but creatures of emotion." People make up their minds about others and situations at an emotional level, and they interact through their emotions. Some are more in control of their emotions than others, but all are working from an emotional level nonetheless. Your words, body language, and actions all trip emotional triggers in people. And depending on what words you use, what body language you demonstrate, and what actions you take,
The following ideas provide you with a range of techniques and insights that can help you trip the right triggers—with authenticity and integrity that allows you to respect yourself and demonstrate respect for others.

Epilogue: People are not creatures of logic, but creatures of emotion—first, foremost, and always.

78. Fill the Emotional Bank Account

In working with people, you have to build emotional equity. You have to do things with every interaction that puts money in their emotional bank accounts. You have to make deposits continuously, and withdrawals very rarely. In fact, one withdrawal with someone you have yet to build rapport with can put you at a deficit with that person so deeply that it could take years and multiple interactions to replenish it.

When you understand that people are emotional creatures, and work to take their emotions and feelings into account first, you give yourself the best chance of making deposits—and avoiding withdrawals.

Assignment: Read on!
Epilogue: To be successful in dealing with people, you have to build emotional equity with them at every interaction.

The following ideas give explicit instructions and techniques for making deposits into people's emotional bank accounts—and how to replenish them when you run into disagreements, conflict, and flat-out confrontation.

79. Make Friends

When it comes to making friends, Ralph Waldo Emerson said it best: "The only way to have a friend is to be one." And being a friend starts very simply—by finding a connection with the people you meet. In short, some of your best friendships started up because someone found that the two of you had something in common. Likely, the person found a connection point with you, or the person appreciated or valued something about you or your work, and let you know it.

Assignment: When you discover a connection point with someone, acknowledge it, and act on it. Drop the person an e-mail, make a phone call, or stop by her office.
Finding and acting on connection points is easy—opportunities are staring you in the face every day. Take the standard office meeting, for example. As you listen to those around the table, find points on which you can connect with them. Then follow up afterward with someone whose point of view you valued, or whom you admired for having the courage to speak up. Once you've made this first connection, take action to recognize it and build on it, and you'll likely find that the person will return the support and encouragement down the road. And as you add up and act on these connection points in time, you'll very likely reap the rewards of friendship.

Epilogue: Look for ways you can connect with the people around you, then take action to recognize that connection and build on it. It will be well worth your effort.

80. Develop Your Emotional Intelligence

It does seem as though technical skills should be what's rewarded, doesn't it? After all, it's your technical skills that make you valuable, right? Actually, studies show you're better off to be strong on the people skills and weaker on the technical side. That's right: you need a combination of both, but if you're going to be weak in one, let it be the technical side.

So, though you want to keep your technical skills sharp, you also want to sharpen your people skills to a fine point. You want to invest just as much, if not more time and energy in building up credits in your coworkers' emotional bank accounts. People are emotional creatures, and they don't care how much you know until they know how much you care. They will respect and remember the emotional equity you build with them long after the value of your technical skill has become yesterday's news. In short, you could be a technical genius, but if your people skills are lacking, you will very likely find your career stagnate or plateau.

Assignment: Take stock of your people skills. Where are your strengths and weaknesses? Leverage your strengths, and learn what you can do to improve your weak areas.

Epilogue: You could be a technical genius, but, if your people skills are lacking, you will likely find your career stagnate or plateau.

81. Remember Names

Nothing shows basic respect for people more than remembering their names. It makes us feel valued and respected. Admit it—you like it when someone remembers your name after a first meeting. We all do. Now remember a time when someone did not remember your name. Likely it felt awkward, and you probably looked on the person with a bit of suspicion or possibly even dislike. But when someone demonstrates he has made the effort to remember your
name, an instant connection is formed. With one word, we communicate that we value someone else as a human being. Remembering people's names is such a small thing, but it has such a big impact on budding relationships. In fact, it's a first step in developing trust with others.

Assignment: Remember one key characteristic about the various people you meet, and assign their names to that picture in your mind.

Epilogue: By remembering someone's name, and using it to find connection points with the other person, you extend the hand of respect and lay the foundation for friendship.

82. Look 'Em in the Eye

Okay, this is not a game of optical chicken, to see who blinks first. Looking someone in the eye is again one of those basic, yet often overlooked and underestimated ways to connect with people. Similar to remembering someone's name, looking someone in the eye is one of those subtle yet very powerful acts of respect.

Assignment: Focus your energy on making genuine eye contact, the kind that comes only from truly being engaged in the conversation, absorbing information.
Forget all the superficial techniques you've learned about how to look at someone just above the brow line to show you're giving that person your attention. Focusing on all the superficial techniques of listening, acknowledging, and providing feigned eye contact is often a distraction from doing what we really need to be doing—which is living in the moment with that one person, in that one conversation. That's not authentic; that's BS. And people will know it. Looking someone in the eye comes naturally when you're listening and engaged in the conversation. If you do this, you'll be much more productive in building relationships and winning people's trust. And you'll earn their respect.

Epilogue: Looking someone in the eye demonstrates your respect for others and confidence in yourself.

83. Give Your Undivided Attention

Most of us today give subdivided attention, instead of undivided attention. The art of multitasking is to blame. We're busy checking the "crackberry" while a coworker is talking, or we're responding to e-mail or doing some other act of unintentional disregard. As a result, we don't invest the time it really takes to build good relationships with people. So how do you cope with all the distracting demands that eat away at your attention span? Simple: stop!
That means stop checking e-mail while talking to people on the telephone. Stop calling out to others while you're talking to a coworker in the hallway. Stop checking your BlackBerry while someone else is talking to you. Instead, give people your undivided attention. Schedule time with a coworker who needs to talk, or tell her you'll stop in to see her later in the day. Put your PDA away during the meeting—you're there to participate in the meeting. These are the techniques that say, "Hey, what you have to say is important, and I want to give it my undivided attention."

Giving subdivided attention is the result of a bad habit left unchecked. Giving undivided attention does take more energy, but it also generates energy as well, and your relationships will be stronger and last longer.

Assignment: Observe your own behavior for a day, and make a list of your actions that say "I'm tuned out." Then think of ways you can stop doing them.

Epilogue: Giving undivided attention is a forgotten art of relationship-building. Resurrect it.

84. Be "Present"

It's been said that 80 percent of success in life is just showing up. This is true, but if you're going to show up, shouldn't you do it in mind and spirit too? Being "present" is the best gift you can give those around you.
Ever been in a meeting when all were present? People were sharing ideas with enthusiasm, making action plans, playing off one another's humor, laughing, and so on. Think about how you felt when you left that meeting. Did you feel lighter, good about the people, more connected, and energized to get moving with the mission? There is a kind of magic that comes from people being present, and it's an infectious feeling.

Work on being that kind of person—someone who lives in the moment of the situation, someone who is present, and who finds the positive and possible in it. This type of behavior attracts others to you, and to your thoughts, your opinions, your advice, and so on.

Assignment: Keep a log for a week of those around you who are "present," and note how others respond to them. Then determine how you can adjust your own personal style to be present with those around you.

Epilogue: If you're going to show up, be "present"—in body, mind, and spirit.

85. Practice Good Listening

The Japanese have a simple but insightful proverb that's worth posting somewhere near your desk. It goes like this: "Those who talk, sow; those who listen, reap."
The most important element of good listening is simple: You have to want to understand the other person's point of view. Listening is not about how many times you recap what the person said, how often you nod your head, or how many affirmations you give to the other person. Those are techniques to help you become a better listener. The fundamental purpose of listening is to gather information about the other person—to understand what she thinks, how she views a situation, or what he values, where he's coming from, and so on. And that can be very powerful—in several ways.

Most importantly, listening allows you to learn people's likes, dislikes, fears, desires, joys, concerns, and so on. By knowing more about people, you become much more effective in working with them. And secondly, people are attracted to good listeners. Why? Because we all want to be heard—it's a common human desire. Many people hear, but they are not listening. Good listeners give people that opportunity. If you sit quietly and let others do the talking, you can have an excellent opportunity to learn.

Assignment: Look for at least three opportunities daily to listen to someone else. It's not about agreeing, or defending your point of view. Don't judge, argue, or defend; just listen—and learn.

Epilogue: Those who talk, sow; those who listen, reap.
86. Connect With People Through Questions

Being heard is a natural desire for all human beings. And so is sharing information about ourselves. Though we may not want to share every aspect of our lives, we do like for people to open the door and let us show them who we are. Asking questions of others opens that door. When we ask others what they think about a situation, a viewpoint, or about themselves or their work, we send the message that who they are is important.

In short, you get to know people better, and that gives you insight into how they tick—what makes them feel the way they do, what makes them act the way they do. And when we know the people with whom we work better, we work better together.

As long as the questions are not prying, aggressive, too personal, or judgmental, most people will open up about themselves. And, when you pair that opportunity with your good listening skills, you have the chance to make a connection.

Assignment: Develop a short list of three questions to ask someone you're meeting for the first time. Choose questions that will give you some immediate insight into who the person is, but also put the person at ease with you.
Epilogue: Asking questions opens the doors for others to share who they are with you.

87. Be Careful With Your Opinions

Likely you've heard the old saying about opinions. It's not appropriate to repeat it in this book, but most people will agree the adage is true: Everyone has one.

Think about it. You've met opinionated people. Aren't they annoying? Flaunting their personal opinions openly as if their view is the correct view? And these people usually have an opinion about everything and everyone. Don't be one of these people. And if you are one of these people, keep reading.

Assignment: Throughout the next week, observe those around you who freely give their personal opinions. Note how you feel about them, how your coworkers and management respond to them, and how effective they are in working with others.

Withholding your personal opinions can be a challenge, especially if someone's views differ at the deepest level from your own. And sometimes it's best to keep ours to ourselves—particularly our personal opinions. In cases such as these, it might be best to politely avoid the conversation or change the subject altogether—such as with politics
and religion. These, as you know, are topics that strike at people's most core values. And they are usually emotionally charged.

So how do you hold a give-and-take conversation in which you're fully engaged to make a connection—all without giving your opinion? Ask questions, then listen. And when and where you can agree, say so.

However, in the course of your daily work, you'll need to give your professional opinion. When doing so, be sure to shape your response so that your opinion doesn't come off as the only view—because it never is. People who know you and trust you, the people you've made connections with, will see the value in your opinion and support its merit.

Remember: Giving your personal opinion is just a way to get attention. There are 151 smarter ideas in this book for getting the right kind of attention—the kind that wins you friends and influences people.

Epilogue: Everyone has an opinion, and sometimes it's best to keep ours to ourselves.

88. Withhold Judgment

A judgment goes further than just agreeing or disagreeing with someone's viewpoint. When you pass judgment on someone, you adjust your behavior to align with your opinion about him or something associated with him. In short, a judgment predetermines how effective you'll be in working with someone.
And is that really fair? We don't like it when someone judges us—particularly when that person doesn't really get to know us. At some point, we have all been stung by the feeling of being judged by someone who doesn't have all the facts, or by someone who suffers from ethnic, gender, or lifestyle prejudice, or professional jealousy. Judging others gives us some misguided permission to dislike someone—or worse—and to act upon it.

So, how do you withhold judgment of others? Get to know them. Invite a conversation, ask them questions, find out what drives them, what they want to accomplish with a project, their career, their lives, and so on. You may find you have more in common than you had imagined.

Assignment: Make a list of the people in your office with whom you have difficulty. What judgments have you made about them? Are those judgments holding you back from developing a more effective working relationship?

Epilogue: Making judgments of others limits how effective you will be in working with them.

89. See Both Sides

There are indeed two sides—or more—to every issue or argument. And people will trust and respect you if you show them you can see the different dimensions of a situation, and not just the one you prefer.
In fact, people who can see both sides of an issue are usually viewed as more credible. By actively seeing both sides, you show people you are fair, thoughtful, and respectful—all traits that make up the people we usually admire the most. They're open-minded—open to change, and open to having their minds changed—and people naturally gravitate to them.

It's easy to align ourselves to the familiar, to what we know or feel comfortable with, and to argue against the unfamiliar. But the person who can stretch beyond the familiar, and seek to understand a different view or experience, is truly the gifted among us. When we open ourselves up and see differences in views, approaches, and practices, we open ourselves up to a range of possibilities, and a variety of solutions. And we gain a golden opportunity to learn about those around us.

Assignment: Consider a coworker or family member with whom you are at odds. Think about why you are at odds, and then think about the other person's viewpoint, what she knows or feels comfortable with, or her approach to the situation. Next, think of some questions that can help you get better insight into that person's view. Then, with your best intentions in play, plan an opportunity to politely ask your questions.

Epilogue: Seeing both or multiple sides of an issue or situation shows you are a fair, interested person bent on seeking the middle ground.
90. Edify

The word edify comes originally from the Latin, meaning "to build." And when others aren't around to defend themselves, that's exactly what you should do: build them up in their absence.

Sometimes Mom's adage is true: If you don't have anything nice to say, then don't say anything at all. It's bad manners and poor form to run down, gossip about, or demean someone who is not around to defend himself. Any time you speak about a coworker, boss, or colleague, always speak positively about her. Never tear someone down. But if you must say something, find honestly positive things to say, such as: "I know John can be direct, but he's very intelligent," or "Lisa has had a rough year, but she's bringing a lot of value to our team."

And though most will never say it to your face, people listening to you run someone else down are secretly wondering if you do the same to them when they're not around—whether they, too, will become a topic of discussion. When people hear you edify others who are absent, they trust you will do the same for them when they are not around. Prove them right!

Assignment: Write down three positive things you can say about all of your coworkers. The next time you're tempted to run any of them down, let the positive words you've prepared come out of your mouth instead.
Epilogue: Remember: When you sling mud, you always get some on yourself.

91. Give Honesty With an Equal Dose of Compassion

It's been said that honesty without compassion is cruelty. Yes, it's true that we all want to know where we stand, and we do prefer people who are straight up with us. But ever met someone who prides herself on brutal honesty? Someone who feels at ease saying whatever comes to mind—all in the name of "just being honest"? There is a wide difference between "telling it like it is" and compassionate honesty.

Assignment: Think of a situation in which you feel you must be honest with someone. Consider how you would like someone to share the same information with you. If you're really honest with yourself, you'll find a compassionate way to be honest with someone else.

Usually people who are brutally honest lack something crucial in developing relationships. It's called tact. And tact stems from a sense of compassion. But just what exactly is compassion? In short, it's a measure of sensitivity. It's knowing that, often, people are unaware that their behavior or actions are rubbing us the wrong way. It's understanding
that people come from different walks of life, and their behaviors are a result of their culture—domestic, foreign, or family. And that difference doesn't equate to inferiority.

So when you find you need to "tell it like it is," tell it with compassion. Share facts without judgment. Let the person know you have his best interests at heart, and you want good things for him, and choose language that demonstrates you believe the best of this person. When you mix honesty with compassion, you have the right ingredients for friendship and respect.

Epilogue: When you find you must "tell it like it is," tell it with compassion.

92. Help Others Be Heard

Sometimes it falls to us to help others to be heard. Not everyone is assertive. Some people are shy, or feel intimidated in group settings. There are lots of reasons for this. Too often they can be bulldozed by the aggressive or dominant personalities in the bunch. And they need help from more assertive coworkers such as yourself.

Be on the lookout for these folks. When you see someone hanging back in a meeting or group discussion, purposefully ask for her viewpoint, or what she thinks about one of your ideas, and encourage her to participate. Make a concerted effort to help her feel safe. By helping others to be heard, you foster an environment of inclusion, an atmosphere such that all are welcome and valued.
It's particularly important to extend this effort to those new to the group, from another division, or from an outside organization. And it's especially critical when you have a mix of senior- and junior-level employees. Senior-level employees are used to elbowing their way through; that's how many got where they are—right or wrong. Junior-level employees who are not used to working with senior leadership may feel particularly intimidated.

When you help these coworkers to be heard, you can set the stage for someone to shine. And you are sure to gain an admirer in the process.

Assignment: Scout the room the next time you are in an office meeting and identify those who have not had a chance to participate—and make an opportunity for them to be heard.

Epilogue: When you help others to be heard, you create an environment of inclusion in which others feel welcome and valued.

93. Help Others Be Understood

It also falls to us sometimes to help others to be understood. Sometimes people have difficulty articulating their ideas, or they are an outsider for some reason and need a champion. In these cases, you have the opportunity to be a translator and a champion.
But you want to approach these opportunities with delicacy. Mishandling these situations could paint you as a know-it-all, whether with the person or with the group. So how do you proceed without making a mess of it? Think facilitation, not dominance.

The next time you see a colleague's input being met with puzzled expressions from the boss or a coworker, summarize what you think the person is saying and then ask if your summation is correct. It may require some give and take for a few minutes, but if you're patient and genuine about understanding, you will help your colleague get his point across. This technique, if handled with sincerity and good intentions, is powerful, and it's an act of respect. Your colleague will feel as though he has an advocate, and he will remember the gesture. Voila! You've made a connection!

Assignment: Seek out opportunities to help others to be understood—to get their point across, to clear up a misunderstanding with the boss or a coworker, or to demonstrate their unique contribution to the group.

Epilogue: When you help someone to be understood, you have an opportunity to be someone's champion.

94. Allow People to Save Face

In Asian culture in general, it is considered better to be silent than to point out someone's error in front of others. It's called saving face. This is a cardinal rule in a number of Eastern cultures.
In Western culture, we're often more interested in winning, more interested in being right, more interested in showing our expertise. We're more interested in these bragging rights than in giving another some leeway to be wrong. But those who have strong people skills often take a lesson from the Eastern cultures. When they hear ideas or statements of fact that are off the mark, they address those in private with the person—not out in the open for all to see and hear—and let their colleagues save face.

Think about it. You don't like to be told you are wrong in front of others either. Learning that we've been wrong in some way is never easy to hear, but being told in front of others is just adding insult to injury. Just as your colleague will appreciate you for helping her to be heard or understood among others, she will also appreciate it when you let her save face among them as well. Always allow others to save face.

Assignment: Observe a colleague who does not know the rule of saving face. Watch how others respond to this person.

Epilogue: Being told we're wrong is never easy—but being told in front of others is just adding insult to injury.

95. Encourage

All people need encouragement. It is the fuel that keeps us going.
Encouragement is the balm that covers our hurts, that helps us pick ourselves up when we're feeling down, and that gives us faith that "this too shall pass," and allows us to keep moving forward. And it is the fire that spurs us to greater heights and drives us to stretch ourselves farther than we ever thought we could travel.

You can be a source of encouragement for others. You can be the well people come to when they need a dose of inspiration, the mirror that helps them see their strengths when all they see is their weaknesses, the fire that fuels their dreams and launches them to new endeavors. Listen to them, understand them, and then lift them up.

Assignment: Look for opportunities daily to encourage others.

Epilogue: To genuinely encourage others is to help them see what is and what can be.

96. Encourage With Words and Perspective

When you are striving to encourage someone else, look to words and perspective to help you. To offer encouragement is not to forgo reality, or be unrealistically optimistic. In fact, being overly optimistic will make others doubt you, thinking you lack objectivity. But to genuinely encourage others is to help them see what is and what can be. It is about taking the truth, mixing it with reason, and helping others to see it for themselves.
Sometimes encouraging words might be something as simple as "hang in there" when someone is having a bad day. Other times you might need to track down a quote from an inspiring source, such as a writer, philosopher, historical figure, or spiritual icon. And sometimes you might just need to write down what you value about that person and send it in a card to be read again and again.

When listening to a colleague who needs a boost, look for opportunities to put her thoughts into a realistic perspective. For example, we often hear people make comments about themselves such as "I can't ever seem to get it right," or "the boss really hates me." Let these words be a red flag telling you to step in. Phrases such as these are overgeneralizations or exaggerations, and you need to check them with your own observations: "Mary, that's not true. You get it right a lot. You're just having an off day/week." Or, "Mary, the boss values you. He's been a little cross with everyone the past few days. He's just having a rough week." Help her see where her words might be off the mark. Your words and perspectives can be a light for someone during a dark week.

Assignment: Check out Websites such as www.inspirational-quotes.info for convenient access to encouraging quotations from people throughout history. Share these whenever you see someone in need of a lift.

Epilogue: Your words, insights, and observations can help encourage others. Step up and use these tools to build emotional equity with someone today!
97. Pat Others on the Back

Sometimes encouraging someone is as simple as an attaboy, a pat on the back, a little recognition for a job well done. And sometimes it falls to us to give it to others—coworkers and bosses alike. Yep, that's right: Even the boss needs a pat on the back occasionally.

We often think the boss is recognized enough, but you would probably be surprised to find how little top executives recognize their management when they hit the mark. Middle and frontline management lead by example, and if their boss isn't handing out pats on the back, they usually don't either. Of course, this often trickles down. So it sometimes falls to the coworker—the one who works most closely with us, who sees our daily struggles and minute victories—to pat us on the back and let us know our efforts have not gone unnoticed.

Of course, do it with sincerity. Remember, again: no sucking up! People can smell phoniness a mile away. Be on the lookout for good works, and give a pat on the back whenever you can.

Assignment: For your records only, keep a list throughout the week of coworkers' and the boss's good works. Then make it a habit to take the last hour of every Friday to send an "attaboy" e-mail of recognition to those on your list. Your note will be one of the first they will see when they start their day on Monday.
Epilogue: Be on the lookout for good works, and give a pat on the back every chance you get.

98. Be a Cheerleader

Being a cheerleader for someone is a big stretch. It takes a lot more than words and listening. It takes commitment. Being someone's cheerleader is a long-term and intense investment of energy and diligence.

Cheerleading for someone is about driving them toward a goal. It could mean you have to keep dusting her off, and putting her back on track. It can mean you have to pick her up out of the doldrums again and again, sometimes day in and day out. And being a cheerleader means you have to nurture people with attaboys, encouraging words, and insight. You will need to do it consistently throughout time. It means that as long as you see that person earnestly striving for her goal, you never give up. Parents and successful leaders know this to be a fact.

And the reward you will gain is usually more long-term than short-term. But when you commit to being someone's cheerleader, you will gain someone's lifelong respect.

Assignment: Find someone to cheer on. But be selective.
And when that person achieves his goal, he will remember you years down the road, long after you have lost touch and your working relationship is a thing of the past. Your investment of energy and diligence will be remembered.

Epilogue: Being someone's cheerleader requires you to plant seeds of hope with that person regularly, sometimes day in and day out. But when you commit to being someone's cheerleader, you will gain someone's lifelong respect.

99. Help Others Achieve Their Goals

Goals are often the guidelines of our lives. They keep us focused on the path to what we want the most. In reaching our goals, we first have to define them. And we often need people to help us define and set our goals, and help us reach them. You can do this for others, whether it's a promotion, a new job, earning a college degree, or crossing a lifelong personal bridge.

Assignment: Look around the office for those who seem downtrodden, and help them define and set their goals. Then help them when and where you can to achieve those goals.

Take Michael, for example. At age 50 he found himself downsized from a career that had never lived up to his expectations, his savings drained, and his spirit numb from a life of strife and stress. He
And. Michael realized that his healing had to begin with reviving a lifelong goal he had long thought was dead: He wanted to finish his college degree. or the CEO? Often we are turned off by those who earnestly seek the spotlight. to do it for himself. But it always ate at him. In short. such as the boss. he was the walking wounded. Goals do that for us. we admire those who consistently point out the achievements and accomplishments of 100 120 .151 Quick Ideas to Improve Your People Skills moved with the weight of the world on him. ironically. Epilogue When we help others achieve their goals. he walked straighter. his shoulders hunched. And when we help others achieve their goals. to prove to himself that he had no deficiencies. he had made his way in the world for himself and his family for many years without it. Is it because you want others to notice you. Within just a few weeks of starting back at college. Working toward his goal gave him a focus. That same coworker encouraged Michael to go back to school. alone. An articulate and intelligent person. his head hung low. Let Others Shine If you find yourself always seeking the spotlight. we help them realize a piece of their purpose. ask yourself why. and hope. coworkers. we help them realize a piece of their purpose. But with the help of a caring coworker. He smiled and joked more. Michael’s demeanor began to change. a path to follow. and he looked people in the eye with confidence.
which you can recognize or recomTake top quartermend a coworker—then act on it. And being a team player will gain you more in the long run than striving to be noted as the team superstar. back Peyton Manning for example. and how often others will return the gesture. You might be surprised at the mutual admiration club you will create.Quick Ideas 99 to 100 others. Epilogue When you make way for others to shine. figure out ways each day to let someone else bask in the limelight. In short. pointing out someone’s accomplishment in the staff meeting. You can let someone shine also by recommending them for a high-profile project. Though he is the best of the best in the National Football League. you show you are a team player. or sending an e-mail to the whole team recognizing a coworker’s help with a recent project. and to remark on their contributions to winning the game. in interview after interview with the media he seeks out opportunities to compliment his fellow players. Instead of investing your energy to get yourself in the spotlight. or nominating them for a company bonus or award. Letting someone shine can be as simple as giving someone the floor to present a new idea. we reAssignment spect those who seek out ways to let others Make a list of opportunities in shine. Creating opportunities to let others shine will demonstrate that you are a team player. 121 .
For example. You will find your own spirits lifted when you take time to celebrate life’s moments. did a coworker just buy her first house? Did someone pass a professional certification exam? Did a group just come through a tough project victoriously? Celebrations can be acknowledged with a cup of coffee. So why not look others—then start planning for reasons to celebrate? those celebrations. if your coworker just purchased her first home. after all.151 Quick Ideas to Improve Your People Skills Look for Reasons to Celebrate Life should be a celebraAssignment tion. or a full-blown celebratory dinner complete with cocktails. you could show your joy for her with a congratulatory box of donuts and a card. The reasons can be both professional and personal. For example. celebrations give you an opportunity to let others shine. a plate of cookies. In short. spend on the job—and how and this time jot down events hard we work—the office is a and accomplishments in the great place to create celebraoffice you can celebrate with tions each day. point out people’s hard work. And given how much time we all Get out the list again. It’s a gift. 101 122 . but rather to create a moment for others to stop and smell the roses. and let them know they matter. The point is not about creating an extravagant ceremony.
Have everyone tell a funny story they remember from that person’s first year in the office. Acof people’s birthdays and anniknowledging anniversaries versaries. or just remotely from another location. Of course. and Such Birthdays and anniverAssignment saries are ready-made opportunities to celebrate—and a Set an alarm on your elecgreat way to remind people tronic calendar to remind you that they matter to you. to let others know they are remembered. 102 123 . Anniversaries. Remember Birthdays. a short e-mail wishing someone a happy anniversary or happy birthday is just as special to coworkers working halfway around the world. and you can turn it into a fun trip down memory lane. get the team together for a short celebration to remember someone’s five-year office anniversary.Quick Ideas 101 to 102 Epilogue The point of creating celebration is to make time to stop and smell the roses with others. Then reach out to them and birthdays is also a way on those special days. For example. Or organize one big birthday pitch-in lunch for all those celebrating birthdays in a given month.
And there are several ways you can do that.151 Quick Ideas to Improve Your People Skills The point. Taking care of your own emotional needs is highly important when you take on the challenge of practicing excellent people skills. You also need to know when you’re approaching “people burnout. how to feed them. birthdays and anniversaries are a way to connect with people. if you’re running on emotional empty. The first is to know what your needs are. again. Epilogue Use birthdays and anniversaries as yet another reason to connect with people throughout the year. but rather to acknowledge people on special days. And when you deal 103 124 . keep your own emotional bank This is absolutely critical. and how to surround yourself with people who help you keep your emotional bank account filled. The folcounts. you have to make lowing ideas discuss ways to sure yours is filled as well. and to celebrate knowing them. Fill Your Own Emotional Bank Account To keep up with the Assignment continual task of building up others’ emotional bank acContinue reading.” and what you need to do to cope with it. In short. account filled. you will have nothing to give to others. is not how elaborate you can be. because.
a good push forward. The trick is to be self-aware enough to know the signs. you will experience people burnout to some degree. it’s inevitable. as defined by the Myers-Briggs assessment. If you don’t. Epilogue If you run on emotional empty yourself. Being aware of these two inMake sure you take care of dicators can help you know the basics. night’s sleep as often as possible. However. and some alone time. you will have nothing to give to others’ emotional bank accounts. then you will need additional alone time because solitude is how you recharge your batteries. you could undo a lot of great work in a very short time. and take your energy. and when to square meals a day. Feed Your Own Needs Be careful of running on empty. it’s hard to rebuild. extroverts on the Myers-Briggs 104 125 . you could burn stock of when you know out on the “people thing” pretty you’re approaching burnout.Quick Ideas 102 to 104 with people enough over your career. If you’re an introvert. quick. Take time to reflect on depending on where you get what recharges you. Like death and taxes. And when you’re starting from less than zero. and. Managing relationships Assignment with people takes energy. and to head it off before it escalates to a full-blown case of burnout. such as getting three when to stop.
Call on Your Support Group When you start feeling yourself lose steam in dealing with your coworkers. Reach out and let them do what they were put in your life to do—support you with friendship. conflict and confrontation with others usually are not far behind. or life in general. The point here is to feed your individual needs. Know what it takes to help you maintain balance in your own life. call on your support group to help you sort it out. Epilogue Remember to stop and feed your own needs. And when you get off kilter. the friends who put things into perspective for you. you will be off kilter in dealing with others. Stop and refuel. 105 Assignment When you feel as though you have hit a wall with the “people” part of your job. Do not push yourself when you are running on empty. You know who we mean: the friends who help you laugh at yourself and your situation. take a step backward and take care of yourself. When you hit a wall. 126 . and take care of yourself. If you’re out of balance.151 Quick Ideas to Improve Your People Skills scale tend to need frequent interaction with people to rev up their energy levels. or your personal cheerleading squad. call on your support group. recharge. Know what you need to maintain balance in your own life.
Quick Ideas 104 to 106
This could be a group of friends, your best friend, a close sibling, a parent, or a mentor. Striving to have good relationships with coworkers can be draining, and sometimes leaves us in doubt, or in need of an ear to bend for an hour or so. Sometimes they can simply help you take your mind off a situation for a while. They can make you laugh, remember better times, and paint a more optimistic picture when people and situations bring you down. Be ready for this very real possibility: Managing relationships with people will have its down moments, no matter how astutely you deal with others. People are people, and you cannot control them. You can only control yourself, your reactions, and your attitude. Your support group helps you do just that.
Epilogue
Remember that, when you deal with people, you can only control yourself. Your support group helps you keep all systems in check.
106

Keep Honest Company

Though we want friends who will listen to us without judging or criticizing, we also want them to be honest—and call us on our own bull when we need it. These are the friends who are comfortable enough to set us straight when we need it, but who also give us room to make our own choices and mistakes.

Assignment

Tell a friend this week that you appreciate his honesty, and value his willingness to set you straight when you need it.
This is the kind of company you want to keep. As you strive to improve how you interact with people, your honest friends will make the best sounding boards. They are the ones who will keep you from letting rationalization persuade you that a bad action is a good one. And they will tell you when you mess up. Precisely because they are daring enough to call you out when you fumble, they are also the ones you can truly trust when they tell you you have done something right. In short, you want your support group to be made up of honest people, those who have your best interests at heart, and will steer you in the right direction when it looks as though you might be veering off course. These are the friends worth having.
Epilogue
An honest friend is one who will speak up and tell us what we need to hear.
107
Get Inspired
The people in our lives who see us as bigger and bolder than we see ourselves are the best friends to have. These are the people who inspire us—they tell us to believe, they show us how to believe, and they convince us to believe. People who inspire us are like a well, filling us up with courage and creativity. They give us direction, and they get us moving forward. And we all need them in our lives. In working with people, particularly difficult personalities, day in and day out, you can lose sight of who you are—and where you're going. Managing your people skills is work, and it can be tiring work that can leave you feeling empty some days. On those days when dealing with your coworkers, boss, or anyone else has torn you down, find time with a friend who inspires you. Let her remind you of who you are, from where you have come, and what you can be. When you feel inspired, you can find your courage, your creativity, and your compassion—all the skills you need to be your best with people.

Assignment

Plan lunch or coffee this week with a friend who inspires you and lifts you up. He or she can help you rise above the pettiness of the workplace and inspire you to be more than you already are.
Epilogue
An inspiring friend reminds you of who you are, from where you’ve come, and what you can be.
108

Find Friends Who Edify You in Your Absence

You need friends and colleagues who have your back. Just as you should edify others (see Idea 90) in their absence, your friends should as well.

Assignment

Take stock of your closest office confidantes. Would they have the courage and inclination to defend and edify you in your absence?
or failed to defend you when maligned by another colleague. you owe it to yourself to associate with others who will do the same for you. By “role model” we mean a class act. Nothing is more deflating and depleting than learning a coworker has dogged you out of earshot. Epilogue Just as you want to edify others when they are not present. those who are truly your friends should deflect mud when it’s being slung in your direction. In this case. For example. When you spend time with role models. you have probably gotten the point multiple times that practicing effective people skills is hard work. someone who models what you want to practice. Find a Class Act to Follow By now. you get to see examples of effective people skills in action. the famed basketball 109 130 . It takes diligence and commitment. and will speak to your positive attributes and defend your honor when you’re not around.151 Quick Ideas to Improve Your People Skills These are the kinds of people who will defend you if you’re spoken of out of school in your absence. These are the people who will keep your confidences. and you should do the same for them. you will want to seek out role models who already demonstrate good people skills. in your absence. In short. Those in your support group who cannot do this should be removed from the list—and added to another. many business and sports leaders admire John Wooden. and role models.
But when you add in the people factor. Epilogue With repeated exposure to a class act. you’ll find yourself becoming like the company you keep. When you are feeling unsure in a situation. and imitate him. 110 131 . Take a “People Break” Life in general today can wear you down. But with practice and repeated exposure to a class act. and find ways couragement. and enpeople skills.Quick Ideas 108 to 110 coach who is considered an allAssignment around class act. Wooden is noted as a polished individual Identify those in your who approaches people with circle who demonstrate solid compassion. listen to them. honesty. that. it gives you living examples to follow. you can really find yourself feeling drained. we’re telling you to “fake it to make it” on occasion. and all the energy it takes to deal with them effectively. you can pull from memory how one of your role models handled a similar challenge. you’ll see yourself becoming more like the company you keep. he is noted in history as Watch them. Yes. of the most effective and relearn how they think. and vered coaches in sports. Being exposed frequently to a class act gives you a pattern to follow. And because of to spend time with them. then imitate their polished skills every chance you get.
or a few hours if possible. Making the effort to dial down your interaction levels at these times can be critical to your success—both professionally and personally. take an interaction break. Of course.m. That means limiting how much interaction you engage in with others for the remainder of the day. you just don’t have to indulge in in-depth conversations. You can still be polite. The point is to back off when you’re feeling as though you’ve reached your people limit for the day. and your calendar is taunting you with four more meetings to come. If you know you’re at the end of your rope and it is only 10:30 a. and your tolerance levels with people are Think back to times when approaching the bottom. rather than your speaking skills. Change your scenery. get out of the office for a quick walk. a spat with a coworker or say When you start feeling that something you regret? you are tapped out with people. 132 .151 Quick Ideas to Improve Your People Skills Be alert to when your batAssignment teries are running low. Just go off to yourself for a few minutes. take a people break. Epilogue When you start feeling tapped out. take a people break. and commit a careeryond that limit? Did you have limiting move. This might be a good time to practice your listening skills. sometimes getting away from the office just isn’t possible. What happened or do something you will later when you pushed yourself beregret. These you reached your people limit are times when you could say for the day.
read. you might want to read up on intergenerational relationships in the workplace. Take time out every now and again to add to what you already know. you will find yourself handling prove how you relate to people like an expert. 111 133 . what you put people and people skills. or explore an area about people about whom you would like to learn more. If you apply what you as you’re working to imlearn. human psychology. if you’re a Baby Boomer working with GenXers or Millenials. but it will also expand your vocabulary and give you a broader range of words to choose from in relating to others. For example. And. to get a list of books about health. Then set a into your mind also afgoal to read one book a month for fects how you think.Quick Ideas 110 to 111 Sharpen the Saw by Sharpening Your Mind Read. or how to manage conflict. or your local limouth affects your body’s brary. read! Reading helps to keep Assignment your mind sharp. Just as Get on the Web and check out what you put in your an online bookstore. just select something in which you’re interested and head to the local bookstore or library. So the next year. people. Adding more to your mental repertoire through reading will not only help you expand your understanding. of course. Don’t make it a grueling assignment. the Internet is filled with quick reads and articles that can be helpful as well. read up on topics such as communicating more clearly.
time to take a people make it a point to get out at least break. eiAssignment ther alone or with a friend. The lunch hour is a great If you’re a desktop luncher. read! Get Away From Your Desk for Lunch Get out for lunch. or tuck away in a coffee shop and read a book. surprised how much better you Step out in the sunwill feel after a leisurely lunch shine. refresh. take a walk through away from your desk. then bring your lunch and find a quiet spot in the office.” To play. You will be second half of the day. If you don’t like eating alone in a public place. Read. such as a conference room. the park. and so forth. or the strangest thing to happen to you that day or 112 134 . Take along the iPod and listen to your favorite tunes while you munch on your favorite sandwich. then twice next and prepare to take on the week. Epilogue Take time out to add to what you already know about dealing with people. Or bring a friend along and play a game of “That’s Wild.151 Quick Ideas to Improve Your People Skills In short. once this week. you simply share the most bizarre thing you’ve heard all week on the news. read. sharpening your mind is a great way to sharpen your people skills. to refuel.
you cannot talk office politics or gossip. or attend a church event.Quick Ideas 111 to 113 week. and plan one social event a Applying your people skills week. Which social activities you attend are completely up to you. and get a change of scenery and perspective. whatever you share has to be funny or comically strange. The second is. The point is that. so recharge your batteries with some time out with your girlfriends. get moving. You’ll come back feeling refreshed and ready to tackle the rest of the day. guy friends. There are only two rules: The first is. get up. Go! Do it now! all day at the office can be draining. In fact. get some chow. or significant other. Epilogue When lunchtime rolls around. you need to get up. Take in a game. And you need balance to keep your people skills on track. Attend Social Events Get out your social calenAssignment dar and start making plans. and get out. but you need them mixed into your week to keep you balanced. head for the local pub. go to a concert. when lunchtime rolls around. They let you express 113 135 . you should plan one soGet out your calendar cial outing a week—or more. Social events give you a chance to let your hair down around people you know or who have similar interests. get moving.
and sometimes confrontations. But what about Keep reading. and you cannot work with people 52 weeks out of the year without having occasional disagreements. Add them to your weekly routine. and they give you an opportunity to make new friends and increase your circle of connections. a bright spot in the week to which you can look forward when the workplace drags you down. how to get along with others. But when these are handled correctly. excellent-people-skills mission? We’re glad you asked. able to handle more than 114 136 . for Assignment the most part. Epilogue Social events can be a light at the end of the tunnel for a tough work week. they don’t have to derail your efforts.151 Quick Ideas to Improve Your People Skills yourself. Handle Conflict With Confidence Up to now we’ve covered. Conflict is a natural part of life. Managing your interpersonal skills through conflict can actually make you a stronger communicator. Think of your weekly social event as the light at the end of the tunnel. they can make your relationships with people stronger. This question shows that you have realistic expectations of your relationships with people. There is those times when disagreement much more for you to learn hinders or threatens to derail our and put into practice. conflicts. because that means you’re smart. and more honest. In fact.
And. no. maintain profesdisappointed. Epilogue Conflicts. And there are people who don’t want to play by the same rules. After all. They don’t think they have to be considerate. So. conflict will arise. and insights. you will be able to handle them like a pro. people to always get practice your listening skills. and limit your interacpeople will never get tion with this person. can actually lead to stronger. along because they do not want to put in the work necessary. Not always. Some people just don’t want to make that effort. Some sional courtesy. And with the following tips. yes. you will be in these situations. and so on. getting along with others in a way that creates a win-win daily takes a lot of work. Can’t We All Just Get Along? Can’t we all just Assignment get along? Unfortunately. Others may believe themselves to be superior to others for 115 137 . more honest relationships. Speak less along. respectful. handled appropriately.Quick Ideas 113 to 115 just the convenient and easy situations. compassionate. When you encounter someone who If you expect refuses to extend the hand of respect. someone who can face conflict and disagreement productively is a much more effective employee than someone who just avoids it. techniques. as you have learned thus far in this book. fair.
If you have made a sincere effort to make a connection with her and she does not return the effort. And we all have to contend with these folks at some point or other—in some work environments. of course. be polite. you cannot 116 138 . The point is to accept that we cannot always get along with some people. It is an unfortunate fact of life. you have nothing to feel bad about. you may have to contend with them every day. what do to? When you encounter someone who repeatedly shows you she is not willing to do her part to get along with you. wealth. 365 Opportunities for Conflict— 366 in a Leap Year Every day and every conversation is a potential for conflict. be respectful. Though you may enter each with good intentions. It’s a fact of life with which we all have to contend. either due to position. As long as you take the high road. social status. But you will encounter others who are. Just continue to be yourself and get on with your day. Accepting the fact that people will just not be inclined to get along with you or others is not to say that you should be on the lookout for these folks. but limit your interaction with this person. people will not always get along. Epilogue Manage your own expectations.151 Quick Ideas to Improve Your People Skills some reason. or religious beliefs. then the problem becomes hers. politics. So. You are not one of these people. authority.
With 365 days in a year. you experience it. but which to use depends on the situation. and 366 in a leap year. The foolish person seeks to avoid it altogether. The following you also need to temper your ideas describe strategies for expectations when it comes to how to manage conflict when conflict. The smart person has a strategy ready for when they arise. Read on. Some conflict comes from misunderstandings. because arise they will. So as with getting along with others (Idea 114). The foolish person seeks to avoid them altogether. the smart person has a strategy ready to handle the situation. And some comes from the people described in Idea 114—they just feel they have no responsibility to get along with others. there is much opportunity for you to experience disagreements or confrontations with others. Each of these requires a different strategy. You just need to know which strategy gives you the best chance of turning the situation around. 139 . There are several strategies for diffusing conflict and turning it into something productive. Some comes from a lack of information or a lack of communication. Epilogue When it comes to managing conflict with people. and most can be diffused and converted into rapport over time. or miscommunications.Quick Ideas 115 to 116 control others’ responses or reAssignment actions.
you both have an opportunity: You have an opportunity to learn from where the other person is coming. correct a misyou help me understand from communication. amazed at the results. stop and straighten out a misunderask this one magic question: “Can standing. if your finance colleague sees a situation one way. Likely. or a miscommunication. Because neither extended the olive branch of understanding to the other. For example. We have all seen this in the workplace: two coworkers who disagree. the current relationship started due to a misunderstanding of intentions. and the situation has festered and infected every interaction they have. it can present you with a breakthrough. Conflict can actually be the catalyst for moving forward. which is a journey of constant detours around the real issue—and missed opportunities to resolve it. People who avoid conflict or confrontation usually take the duck-and-dodge approach. whether The next time you encounter to gain better clarity. conflict with someone. your finance 117 140 . and you see it another. but no one really remembers why. You will be with someone. But when you accept the conflict at hand and ride it out on the high road.151 Quick Ideas to Improve Your People Skills See Conflict or Disagreement as an Opportunity One strategy for managing conflict is to see it as Assignment an opportunity. or simply where you are coming?” Then listo know where you stand ten to understand. they have found themselves in a downward spiral of egotistical pride and unresolved misunderstandings.
you have an opportunity to turn it around. they avoid the person thereafter.Quick Ideas 117 to 118 colleague may be trying to tighten expenditures in your division— and others across the board—so the company can avoid layoffs. Epilogue Conflict and confrontation can be catalysts for moving forward in a given situation. and with whom we’ve built a rapport in time. In reality. See Rough Starts as an Opportunity Yeah. a month ago? Don’t let any wrong. put our foot in Have you had a rough start our mouth. Go find the perWhen this happens son and smooth over the situation. and reshape it into something valuable. These folks practice avoidance behavior. Some people let their pride or embarrassment rule them in these situation. When you approach conflict with such an attitude of learning and understanding. with others we’ve known for a while. 118 141 . we get a margin of forgiveness. more time pass. But when it happens with people we’re meeting for the first time. it often results in a rough start. your colleague is looking out for you and others. we’ve all done it—stepped in it right out Assignment of the gate. said something with someone recently—a week insensitive or just flat-out ago.
if not outwardly. This doesn’t mean you have to grovel or supplicate yourself. 142 . breathe. Why? Because we’ve all been there. stop talking and pairs your ability to reason. when your adrenaline Assignment is pumping.” Most people will laugh. and that imdander up. When things start out rocky.151 Quick Ideas to Improve Your People Skills This is the wrong approach. your brain gets Next time you find your less oxygen. The simple act of breathing will get oxygen flowing to your brain cells. we’ve all done that. Epilogue Circling back to smooth over a rough start shows you have integrity and courage. you probably weren’t thinking—at least not clearly. Your approach could be as simple as. “Sorry about what I said earlier. science tells us that if your adrenaline kicked in. I clearly wasn’t in my right mind. and will put them back into action. breathe. Ever done that? Been in a conversation that was going south and later thought back to what you said? Did you ask yourself: “What was I thinking?” Well. they will on the inside. breathe! When you find yourself in a conflict situation. 119 Breathe! Researchers have found that. so you can stop yourself from saying or doing something you may regret later. In fact. go ask for a mulligan—a do-over. take several. stop talking and take a deep breath.
He’s just trying to be heard. be careful with Assignment this one. Give Yourself a Pep Talk Okay. He’s just trying to be understood.” Giving yourself a pep talk. deep breath you take. and have those down from a conflict situstatements ready in your mind to ation can help you regain remind yourself with every calming. along with resupplying oxygen to your brain through deep breathing. Epilogue The simple act of breathing will get oxygen flowing to your brain cells and help you regain control of your thinking. You don’t want to be caught having a Think of positive things to say one-way conversation out about each of your coworkers when loud.Quick Ideas 118 to 120 Breathing helps you get your fight-or-flight response back in check. 120 143 . Quietly tell yourself with each breath: “I can handle this. I can give him that. While you’re taking all those deep breaths recommended in Idea 119. focus. add in some self meditation. and returns control to the thinking part of your brain. helps you control yourself— and that’s the only person in a conflict you can control. so to speak. But talking yourself conflict should arise.
“It seems like we’re not seeing eye to eye on this. Simple acknowledgement of the mounting tension can be enough to break the ice with someone. Have the Difficult Conversations Beforehand One of the best ways Assignment to manage conflict is to Take stock of your current head it off. Is there one tense issues before they that seems tense. you could begin with.151 Quick Ideas to Improve Your People Skills Epilogue Self-talk helps you get control of yourself. as though it become full-blown concould be heading for a conflict? If flicts. take action today. Now. and I want to get some better clarity on your viewpoint. there is a time to let sleeping dogs lie. and build it within a new relationship. having difficult conversations—those conversations you avoid. 121 144 .” These words can strengthen any existing rapport. the only person you can control in a disagreement. you’re not looking to bring a situation to a boil. and get a productive dialog going. you need to step it up and have that uncomfortable conversation. and really want to just forget about. For example. You can do that by so. But when you’re getting signs that a relationship is souring or heading toward rough times. and resolve work relationships. Yes.
The point is to make the effort, take the first step, and broach the subject.

Epilogue
When you're getting signs that a relationship is turning sour, or moving toward a full-blown conflict, head it off with a calm, preemptive conversation to clear the air.

122. Handle Conflict One-on-One

Hey, look, it's your problem. Not everyone else's. Got a problem with someone else? Don't like the way she treats you? Has she been dealing you dirt behind your back? So deal with it! Directly.

Make your first stop a conversation directly with the individual involved. Don't go to your boss. Don't go to her boss, or the other's boss. Don't run around telling everyone else in the office. Do it in private. If you go to your boss, you'll just make the situation worse and likely destroy any chance you had at fixing this problem yourself and coming out of it with a positive solution.

Handling it one-on-one does not involve anything other than direct discussion about the problem, with just the two of you present. Approach the other with the problem objectively, and seek a common solution—one that can benefit both of you. Involving others simply complicates the issue with other relationships. If you do that correctly, you'll have a chance to re-create the relationship in a positive way. You'll find this to be a more powerful approach than you imagine.

Assignment
Consider all the times someone went to your boss about something you did or a decision you made, or told others around the office. How did that make you feel?

Epilogue
Remember to deal with it yourself and look for the win-win solution.

123. Having Your Say Doesn't Mean Always Having Your Way

Young professionals usually have the most difficult time with this, sometimes to the point of resenting their boss or more senior-level coworkers. But more experienced professionals become somewhat immune in time to the fact that having your say does not mean having your way.

Go into situations with reasonable expectations: Just because you're being given a voice on an issue doesn't mean your ideas will be accepted. A good way to put this into perspective is to think of a baseball batting average. If you get more than 30 percent of your ideas accepted during your tenure with a particular organization, you're doing well. And those who have learned to master the art of listening and learning are likely to bat a 70-percent average.

The point is to have reasonable expectations that being heard doesn't always translate into things going your way, and to avoid letting these incidents deflate your confidence. If you understand this, you will head off disagreements and potential conflict based on your own personal view of reality—because your reality may not be the same as someone else's.

Assignment
When you know you are going to have a chance to have your say, determine ahead of time, based on conservative expectations, what will be a success for you when all is said and done.

Epilogue
Have reasonable expectations of people and situations, and remember that having your say with someone doesn't mean you will get your way.

124. Learn to Eat Crow

Another way to handle conflict effectively is to admit when you're wrong—quickly. And we do mean quick. The longer you wait, the more damage you will do. In fact, people who admit they are wrong win our respect far more often than people who just can't bring themselves to do it.

But there are times when you have to cast off your pride and face the fact that you just messed up with someone. When that happens, a quick but sincere apology is in order. You should follow the age-old marriage advice of making amends before the sun sets. Use the COB code for the office: Apologize before Close of Business.

Eating crow is as simple as telling the person, "Jack, I'm sorry about earlier today. I overreacted. I hope we can let bygones be bygones on this one and move forward." However, if you cut someone down in front of a group, you'll have to eat a bigger helping of crow. That means you will have to apologize privately and at the next group gathering. See why we recommend allowing others to save face (Idea 94)? You will regain the injured party's respect for you.

Assignment
Do you have any overdue crow to eat? If so, get busy doing it.

Epilogue
When you're wrong, cast off your pride and quickly apologize.

125. Bring the Peace Pipe

When you sense that tension might be brewing between you and someone else, make a peace offering. So what are peace offerings? They are small gestures, such as a humorous article you know the other person will like, an offer to make a run for mid-morning coffee or lunch, a surprise mid-afternoon delivery of the person's favorite brand of soda, or strawberries from your garden. The list could go on and on. But note that they should be given only with the most sincere intentions—not for the purposes of sucking up. Again, people can smell insincerity a mile away.

Assignment
What peace offerings can you make today? Identify them and make them happen.
Epilogue
When tension is building between you and a coworker, offer a peace pipe.

126. Break Bread

When you're sensing that a relationship could be better, find a reason to break bread with the other party, either to build the relationship or, in some cases, to maintain it. Breaking bread is as simple as going to lunch together, or to breakfast, or for after-hours tapas. Food relaxes people and creates an informal setting. There is something about food that just brings people together and softens their defenses. Sales professionals know this technique very well, and use it weekly, even daily.

When you do sit down to break bread, it's better not to have a mental agenda to accomplish at the meal, other than getting face time with the person. Just remember to go with the flow, with the strict intention of being helpful and trustworthy. Pick a place that has a low-key atmosphere and is relatively quiet. Nothing kills an opportunity to break bread productively like a noisy, high-energy restaurant.

And remember: Breaking bread is not just for relationships on the fritz; it's also a great way to connect with people and get to know them. In fact, it's a great idea to plan individual bread-breaking "sessions" every month with a handful of people with whom you're working to build rapport.

Assignment
Identify two relationships you need to nurture, and plan lunch dates with those individuals.

Epilogue
There is something about food that brings people together and softens their defenses. To help smooth conflict with someone, break bread together.

127. Fight Fair

When conflict leads to confrontation, fight fair! Some people escalate conflict to confrontation in a matter of seconds, so there could be instances when you find yourself with no time to head off tangling antlers with the office hothead. But even when that happens, you have a responsibility to fight fair.

Okay, we'll break it down: Fighting fair means you have to stick to the issue at hand, keep your comments neutral, argue the facts of the matter, and keep it between you and the other person. Sticking to the issue at hand means no dragging up past transgressions of the other party. You must stay focused on the matter that needs to be resolved immediately—and that's all!

Also, keep your comments neutral: Set your personal biases or dislikes about the person to the side, and keep to the points that are in disagreement. You'll also want to watch any general statements, such as: "Kate, you always do this!" That's not only unfair, it's also immature.

Fighting fair also means giving the other person the benefit of the doubt whenever you can. Remember: Most people are working from good intentions. Let that thought lead your comments when you find yourself in confrontation.

Keeping it between the two of you is simple. Some people will argue an issue openly in a meeting, which makes everyone uncomfortable and often does irreparable damage to the relationship. Ask to take the conversation "offline" for another time, and, if the situation allows, discuss it calmly later. And, if the other party is a coworker, keep the matter between you and the other person, and resist the temptation to talk about it with others in the office—or tattle to the boss.

Assignment
Think of times when someone did not treat you fairly in a disagreement. What were the other person's actions or behaviors? Think about how you would have handled that situation if you had been that person.

Epilogue
When conflict turns to confrontation, focus on the facts of the matter, and remember to fight fair!

128. Be Mindful of Your Thoughts. They Can Be a Path to the Dark Side

Star Wars fans will find this statement familiar. Yes, we pay tribute here to the wisdom of Master Yoda, who is forever warning his Jedi pupils that their thoughts are the seeds of their actions. Well, the little green guy is right! What you put in your head becomes what you sow over time. If you walk around with negative and critical thoughts about your coworkers, that will be the nature of your relationships.

People can pick up on a vibe from you, and they know if you have less-than-favorable thoughts about them. People can usually pick up on a negative vibe based on nonverbal communication, such as body language, gestures, and tone, which are all founded on what's running through your head. Even Christian author and motivational speaker Joyce Meyers warns that negative thoughts can attract negative outcomes.

But when you give people the benefit of the doubt and believe they are working from their best intentions, you can counter those negative thoughts. So, let go. Assume the best of people, until they give you clear and consistent reason not to.

Assignment
Check your thoughts. Are they based on fair thinking? Do they give others the benefit of the doubt?

Epilogue
What you put in your head about people becomes how you interact with them in time. Assume the best of people.
129. Don't Take Things Personal

We know, we know—a lot easier said than done! But it's a good policy to practice. We are all challenged and questioned from time to time, and some people are challenged daily. Even your CEO gets pushback regularly. Challenging your ideas and decisions is par for the working world, so you need to accept it. If you want to be taken seriously, then you have to have some tough skin, and be able to know the difference between someone criticizing your ideas and finding fault with you personally.

There are two situations you need to consider when working to not take things personal. The first is your ego; the second is the other person's ego. Because you can only control you, let's start there.

So what do you do when someone takes the conversation to a personal level? There are two options, depending on the situation. First, calmly and politely tell the person the issue is not about you, such as, "Joe, it's about X project. I'm happy to have a productive conversation with you about project X." People who make an issue personal and try to drag you down usually don't have a good argument for their position to begin with. That's why they try to divert the issue and make it about you. But it isn't.

The second situation you may have to deal with is that of the office bully. A more firm approach may be necessary though: "If you want to calm down and talk about this later today, I'll gladly do so." This firm approach will let the person know you've set boundaries, and he needs to respect them if he wants to work with you.

Assignment
Think about a coworker who makes everything personal. Does anyone take that person seriously?

Epilogue
When you find yourself in the midst of a tense situation, sort out the seeds of truth that can help you, and let the rest of it just roll off your back.

130. Don't Make Things Personal

While you're working on not taking things personal, also be sure not to make things personal. When you make personal digs at others when conflict arises, you just drag yourself down, and you help the other person garner sympathy from office colleagues. This type of behavior shows your maturity—or immaturity—level.

In tense situations, you have to work even harder to practice the Golden Rule. That means you have to pull out all the tools, such as listening, seeking to understand, breathing, controlling your thoughts, and so on, and make a concentrated effort to show others respect. Particularly control your breathing and your thoughts. The breathing exercise (Idea 119) will help you keep your brain from going on autopilot, and will help you keep control of your thoughts and reactions.

And it's important to realize that making things personal when conflict arises is exactly that: a reaction. Yet it's a reaction that can, and will, do permanent damage. Don't let that happen.

Assignment
Think about a coworker who makes situations personal. Do people truly respect him or her?

Epilogue
Making personal digs at others when situations get tense only drags you down and makes you look bad. Don't make situations personal.

131. He Who Keeps His Mouth Shut, Keeps His Life

Silence is golden—and never more so than when conflict arises. When it comes to talking during conflict, take the less-is-more approach. The old adage that recommends that you listen twice as much as you talk is absolutely true in tense circumstances.

In fact, the more you talk in a conflict situation, the more you run the risk of saying something that could be career limiting—especially if the disagreement is with the boss. This is particularly true if the other party is the boss, the office hothead, or someone with whom you don't have rapport yet. That can get you into trouble. Be careful, however, not to interpret this advice as a suggestion that you should be sullen or obstinate.

Assignment
Read on to Idea 141 for a good technique on what to say during tense situations.
Keeping your ears and mind open, and your mouth closed for the most part, is a better approach than pushing your case in the face of conflict. The point is to keep your intentions focused on resolving the issue in a way that keeps the relationship from going south.

Epilogue
When it comes to talking during conflict, take the less-is-more approach: The less said, the better.

132. Dial Down the Volume

Sometimes disagreements can be swiftly cooled if we just speak more softly. Next to our adrenaline and blood pressure levels, the volume of our voices is the first thing to rise when we disagree with someone.

Speaking more softly can send the message that you're not a threat. It can also calm your own internal system, reducing your adrenaline levels, and allowing your brain to get more oxygen—which helps you think and process information more clearly. In short, a softer approach is usually more effective in tense situations than sheer force.

Assignment
In the next few days, observe people interacting. Who in your office speaks softly, but articulately, in tense discussions? What effect does that person have on the outcome of the conversation? Be cognizant of this, and lower your voice at the first sign of tension.

Epilogue
Fiery discussions can be cooled quickly if you just speak more softly.

133. Watch Your Body Language—It Speaks Volumes

Not only do your words and voice speak volumes in a disagreement, but so do your body language and tone. In fact, these are usually more damaging than the words you use—we forget that more than 80 percent of our communication with others is nonverbal. That's right! The vast majority of what you say never comes out of your mouth, but it's on display for all to see. Your words could be saying one thing, but your gestures speaking much louder.

Some of the obvious offenses are rolling your eyes, smirking, folding your arms across your chest, or putting your hands on your hips. For example, if you talk on top of someone, cutting her off midsentence, you send the message that you believe she is inferior to you. Or, if you start to doodle on your notepad while someone is talking, you send the clear message that you don't value what that person says. Be careful of the message you're sending with your body language.

Assignment
Ask a trusted colleague to watch your body language when you interact with someone with whom you're having tension, and ask him to give you feedback on what that body language is saying.

Epilogue
More than 80 percent of what you say is through your body language and tone. Yep, that means the vast majority of what you say never comes out of your mouth, but it speaks volumes nonetheless.

134. Give People Space

When situations become heated, sometimes all people need is a little space, a little time to cool off. When people need space, don't let the silence bother you too much. She may just need a day or two to hide out and lick her wounds—and recover from her own embarrassment about how she also handled the situation.

If you're someone who needs closure on a tense situation, you may need to wait a day or so and apply one of the relationship tips mentioned previously, such as bringing the peace pipe, or breaking bread with someone. Yep, timing can make the difference between a mended fence and salt in the wound. But, if the person is still giving you the cold shoulder after a few days, you'll need to muster up the courage to sit down with him one on one, and smooth things over—as much as is appropriate. You should not extend yourself further than the situation warrants. Remember: If you're feeling bad about the situation, it's likely the other person is too.

Assignment
Think about a time when you needed a few days to cool off. Did you have a better outlook on things afterward?
There is no shame in extending the olive branch. It only shows that you're open to working things out, and maintaining a productive relationship.

Epilogue
When conflict arises, some people just need time to cool off. Give them space to do so.

135. What Goes Over the Devil's Back, Always Comes Under His Belly

This is just a southern way of saying "what goes around, comes around." If you mistreat someone in some way, you will reap what you sow. You can call it karma or poetic justice. It really doesn't matter; you just want to make sure whatever you sow with others is going to come back to you in a good way.

When we hurt others by not allowing them to save face, making conflicts personal, gossiping, maligning them, disrespecting them, or tattling about them to the boss, they will not forget it. It's very likely they will also tell others about your behavior. And it will affect their judgment about you, and how cooperative they will be in working with you in the future.

If you want what they say to be something of which you can be proud, then sow seeds of collaboration, cooperation, encouragement, fairness, honor, and consistency. These make up the soil in which good relationships are grown.

Assignment
Reread ideas 85 through 99. These are lessons that never get old, because you will use them again and again throughout your life.

Epilogue
What goes around, comes around, so make sure what you sow with others comes back to you in a good way.

136. There Is No Right or Wrong

There are times when we like to think that we're right and someone else is wrong. Sometimes we like to convince ourselves there are absolutes, but that's wrong in itself. There are never absolutes when people disagree. No viewpoint is without scrutiny, question, or challenge. In fact, all viewpoints are exactly that: a view of where someone sits in terms of life experience and knowledge. And we all come from different experiences, backgrounds, and beliefs.

So let go of any stubborn hold you have of right and wrong viewpoints, and accept them for what they are—simply a perspective from which someone has traveled so far in life. Holding to the conclusion that your viewpoint is right and someone else's is wrong is a missed opportunity to expand your horizons and see an issue from a different angle. In short, it's a missed opportunity to learn, and to understand, and a missed opportunity to find common ground.

Assignment
Challenge yourself this week to inquire about someone else's viewpoint you have previously considered wrong.
Epilogue
Holding to the conclusion that your viewpoint is right and someone else's is wrong is a missed opportunity to expand your horizons.

137. Winner Never Takes All

If your approach to conflict has been to win at all costs, then that cost could be very high for you in the long run. We've all met them—the people who have to win the debate, trump the argument, and put down all opposition for the sheer enjoyment of being right. Wow! How useful!

Winning at all costs is a short-term and shortsighted strategy. Sure, you might win in the heat of battle, but you could leave a lot of carnage on the battlefield, and wounds that will never heal with some people. And just as we learn when we drive in traffic, you may have the right-of-way to make that left turn, but the guy speeding toward you may not care—and you end up with the honor of being dead right. In short, the winner does not take all—but risks losing a lot in the long run.

Assignment
When you find yourself in disagreement, give on points you can give on, and concede where you can concede. And work with the long-term, greater good in mind.

Epilogue
Winning at all costs in a conflict is a shortsighted strategy.

138. Fight for the Relationship

When you fight for the future of a working relationship, always keep the future in mind. Ask yourself this series of questions: What do you want that relationship to ultimately be? How can that relationship ultimately benefit you, your work, and your organization? Does a short-term win get you there, or is living to fight another day a better strategy?

Those who are effective in negotiation and dispute resolutions will tell you to consider your end goal with every action you take. If that action will undermine your goals, then it's not a card you want to play. Discard it and choose another that will help you win in the long run.

When you experience difficult discussions with others, you often have to fight your own demons, your own inclinations, your own temptations to win in the short term. You have to fight the urge to beat someone mentally into seeing things your way. To achieve a long-term victory out of conflict, you have to reach for the win-win. It's the only way to play the relationship game.

Assignment
When preparing for a discussion that could result in conflict, consider how you want the relationship to be in the long term. Then determine actions and responses that will help you achieve that goal.
Epilogue
When you experience difficult discussions with others, always keep the future of that relationship in mind.

139. Get Clear

Another quick and easy way to head off conflict is to make sure you understand the situation. Ever gone all reactive on someone only to find out later that you completely misunderstood the situation or his intentions? Ever let your biases or personal dislikes of someone make you think the worst of him—only to be proven wrong later? Then who had egg on her face? Did you feel that your credibility slipped a few notches? If so, you were right to feel that way. And we've all done it. The trick is to avoid it altogether.

So, before you head off down the warpath after someone, make sure you have the facts. Without them, you can't be sure a disagreement truly exists. And the best technique for gathering that information is to go to the person you're about to scalp—but leave the tomahawk in your office and bring the peace pipe instead (Idea 125).

Assignment
Make it part of your personal policy not to act in a disagreement until you're sure you have the facts.

Epilogue
Before you head off down the warpath after someone, make sure you have the facts.

140. Present. Don't Persuade

One of the most common mistakes we make in a disagreement is to convince ourselves that we must persuade the other person to see our point of view. Though people are first and foremost emotional creatures, their emotions are guided by rationality and reason—for the most part. But all we really need to do is lay out the facts or circumstances.

When you lay out the facts of a situation—in a calm and collected manner—you appeal to people's sense of reason. And you demonstrate that you respect their ability to assess the situation with good judgment. Persuading people, on the other hand, can come off as manipulative. It can send the message that they're not sensible enough to assess the facts, or that they're not capable of making a good decision and have to be given passionate direction.

To avoid sending this message, simply lay out the circumstances or facts of why you have come to your position on a matter, without stacking the deck in your favor. Just lay out the facts, and then give people space—and time—to consider them.

Assignment
Put this tip to the test. Identify one issue about which you disagree with someone, and approach him on it. Change your tactic, and see what result you get. If the person meets you halfway, then you've made progress, and you've found a more effective way to manage conflict.
You will be pleasantly surprised by the results. You may not get agreement on every point, but it's more than likely the person will at least meet you halfway. Many times that is all you can ask.

Epilogue
When you don't see eye-to-eye with someone, instead of trying to convince or persuade, change your tactic and present facts to support your point of view.

141. Ask. Don't Tell

Okay, say you find yourself in what appears to be a deteriorating conversation, and you can feel the tension growing. Quickly check yourself. Are you doing all the talking? Are you working hard to have your point of view heard? Are you attempting to persuade someone to your way of doing things? If you find yourself doing any of these, stop! Take a breath, and immediately change your approach to being ask-assertive.

What's that, you ask? Well, it's an approach that allows you to still assert yourself, but in a way that's more palatable to the other party because you ask questions, instead of telling the person what you think.

Assignment
Recall a time when you saw a situation go from bad to worse. Were you doing a lot of talking, telling the other person your viewpoint, working to persuade him to see yours? How might being ask-assertive have helped in that situation?

Your questions can be to clarify, or to softly challenge. For example, you could say, "Am I understanding that we're going to delay the project for another month?" Or, when challenging, "Have we considered what could happen if we take that approach?" Or try something such as, "But wouldn't it be better for all parties involved if the project was delayed for only two weeks, instead of a full month?"

In short, you can still get your point across without persuading, convincing, or dominating.

Epilogue
Being ask-assertive helps you to learn more about from where the person is coming, and gives her the chance to come to conclusions on her own.

142. Look for Middle Ground

The best way to resolve a disagreement is to find middle ground to meet the person halfway. Finding middle ground is the key to creating a win-win in a disagreement. And you want to always strive for the win-win. That way you and the other person both walk away feeling as though you've gotten something you wanted.

The best way to find middle ground is to ask questions of the person—to be ask-assertive. A simple question to kick off the conversation could be: "Tom, what would you see as success on this project?" From Tom's answer, you gain insight into where you can meet him halfway—where you can establish middle ground. In fact, most people will back off of a hardline position when they see the other person is willing to be flexible. And there is always an opportunity for you to give enough to help the other person give a little.

Assignment
Think of issues you're dealing with right now. Where is your point of middle ground in those situations?

Epilogue
Finding middle ground with someone helps you to create a win-win for all involved.

143. Start From a Point of Commonality

What do we have in common? That's the first question you want to ask yourself when tension raises its ugly head in a working relationship. Why? Because starting from a point of agreement, or commonality, helps you and the other person get focused on where you're alike, and what you both seem to agree on, instead of focusing your thoughts on where you're different. Then work from those points to improve your relationship.

Assignment
Bite the bullet and make a list of what you and your office nemesis have in common.
In fact, most people throughout the world are more alike than different, regardless of geography, culture, or ethnicity. But we tend to focus on what's different, and that's where most of our difficulties with people start.

So, think about what you and your nemesis have in common. Are you both passionate about your company? Do you each have a strong work ethic? Are you both bright? Are these traits about yourself that you respect? Then couldn't you also respect them about the other person, and use them as a launching pad for a better relationship?

Epilogue
What do we have in common? That's the first question you should ask yourself when you experience a difficult relationship with someone.

144. Some Nuts Are Worth Cracking

No, we're not giving you permission to think of your office nemesis as a nut! Here we're talking about hanging with those difficult relationships, not giving up at the first sign of trouble and throwing in the towel. We do this too often. We write people off at the first sign of difficulty. Sometimes these people have hard exteriors, making it difficult to understand them. Or they are people who seem to dislike us for some unknown reason.

With these folks it could be just a matter of being patient, having diligence, and getting to know the person one interaction at a time. It could be quite a while—months or even years—before we start to see these people warming up to us and our ideas. But these can also end up being some of the best and most long-lasting relationships we ever develop.

So, the next time you encounter the office grump, or the office bully, remember to take your experience with him one day at a time, and to keep working to get past those hard exteriors to the person he is on the inside. You might be surprised to find diamonds in the rough.

Assignment
Challenge yourself to crack one office nut.

Epilogue
The office nut could really be a diamond in the rough.

145. Put the "Moose on the Table"

Sometimes we have to do this when we have difficult relationships: we have to put the moose on the table, which means being straight-up, direct, getting the issue out in the open. Some people call this "kicking the elephant out of the room," or just drawing a line in the sand. Whatever idiom you choose is not important, but taking a stand and airing out an issue is.

Have you learned that a colleague is bad-mouthing you around the office? Is someone spreading gossip and rumors about you?

Assignment
If you find you need to put the moose on the table with someone, read ideas 85 through 143 again.
146 170 . then you need to make your stand and set boundaries. By merely making her aware you have her number. she will think twice in the future. In fact. if you try to fight every battle that comes your way on the relationship front. Some people forget that their companies can be liable for employees’ statements that lead to defamation of character and slander. Epilogue You are within your rights to put the moose on the table when someone crosses the line and defames your character or reputation. gossip. and friendless. what how do you begin a conversation such as that? Simple. or outright lies about you. that should be done one-on-one. there are some disturbing rumors that have reached me. You are within your rights to ask someone to cool it when he’s stepped over the line. and always with professional courtesy. I hope this is just a misunderstanding. you will find times when you will need to approach these folks and put the moose on the table. But if someone is spreading damaging rumors. And you may need to take this one up with the boss. Of course. Can you tell me if you said these things about me?” It really doesn’t matter how the person answers. but only after you’ve calmly and politely addressed the situation one-on-one with the instigator. and I’ve been told you are at the root of them.151 Quick Ideas to Improve Your People Skills Though you want to use the techniques in this book to help you. So. with something similar to this: “Jane. Pick Your Battles You’ll be pretty worn out.
if someone has asked you to keep something confidential. you will find that they will give the same to you. If so. let it roll off your back.Quick Ideas 145 to 147 people who try to fight every Assignment battle are often seen as reactive Ask yourself if you let and extremist. and the wise know that we have to let bygones be bygones many times in our relationships with people. then But the wise among us you might be taking on too know that people make mismany battles. If you take a live-and-let-live approach to dealing with people. Let the petty go. For example. Epilogue Live and let live whenever you can. from time to time. Not only will this help you stay balanced. Mend Fences If you make mistakes in dealing with people. but it will also make people more apt to forgive and forget quickly when you mess up yourself—and you will. and save your energy for the bigger things that can bring real meaning to your life. keep you drained. and are rarely petty relationship issues taken seriously. be quick to acknowledge and correct them. and you let the cat out of the 147 171 . People will pay you the same margin of forgiveness along the way. takes. they have general human failings.
eating crow. Have you made minute set on that type of any mistakes you need to consituation. be authentic. bring the peace pipe (Idea 125). immediately take action to correct the situation—an apology is a good place to start. Forgive Yourself for Failings So. Have you taken the steps outlined in this book. take care of it at the first opportunity—which generally means finding an opportunity within the next 24 hours. If the mistake is just a simple mishap or unintentional error. you messed up. Do some relationship Do not let another housecleaning. and breaking bread? 148 172 .151 Quick Ideas to Improve Your People Skills bag. Mistakes that erode fess and correct? If so. and be serious about the situation—especially if your actions can affect the future of your relationship with the other person. corrected immediately. offering a peace pipe. get busy people’s trust in you must be today mending those fences. make a beeline to that Assignment person’s office to apologize. And do not be glib about it with a cavalier comment such as “my bad. such as mending fences. Epilogue If you make mistakes that can erode people’s trust in you. and dive right in to an apology. Track down the person.” Be sincere.
write a phrase to describe times we have to just the situation on a piece of pagive people space to get per. tried to restore the trust in the relationship. SomeIf so. then find honor in that. and changed your behavior. then you’ve done your part. Beating yourself up over and over about a situation only continues to keep the situation alive. Are you beating your life. but in the other person’s mind as well. 173 . ment forward. If you’ve apologized. We all mess up with each other from time to time. You’ve adequately taken responsibility. and forgive yourself. too. Epilogue If you’ve done all you can to take responsibility for a failing regarding a relationship. then run it through the over something—and office shredder. and let go of it we have to forgive ourin your mind from this moselves. When you mess up. And it puts you at risk of making the same mistake again. You’ve done yourself up about something all you can to correct you tried sincerely to correct? the situation. then all you Assignment can do is get on with Take stock. not only in your mind. and that’s something in which you can find honor.Quick Ideas 147 to 148 If so. then forgive yourself and move on. do the right thing.
and accept the fact that these folks have some growing to do. The latter is much easier to forgive. self if that energy and effort is moving you forward or holding you And others will fail in the past. if we know someone did not mean to hurt us. ask yourwhen they fail you. but when that’s said and done. With people who fail us intentionally. But for the sake of your own integrity and sanity. And sometimes that’s a hard road to take. As long as you don’t let their shortcomings become your own. 174 . you—both intentionally and unintentionally. it’s an important journey for you. you also Are you holding a grudge have to forgive others against someone? If so. you have to stand up and be the bigger person. then you have to get over it and get on with your life. But what about those who fully intended to do us harm? You may need to put the moose on the table (Idea 145) with these folks. Forgiving others requires you to travel the high road.151 Quick Ideas to Improve Your People Skills Forgive Others as Well Just as you have to Assignment forgive yourself for your own failings. 149 Epilogue Forgiving others requires you to journey down the high road. you can use the situation as a learning experience and a growth opportunity for your own maturity.
Be the first to make the overture anyway. So. find a way either directly or indirectly—depending on how bad the dust-up was—to be the first to mend fences. the George Castanzas of the world will tell you this is a mistake.Quick Ideas 149 to 150 Be the First to Offer the Olive Branch—or the Peace Pipe Ever felt as though Assignment you were in a stalemate with someone with Do you have an olive branch whom you’ve had a that needs delivering in person? If dust-up? It’s very likely so. Yeah. 150 Epilogue Be the first to make an overture of peace when relationships hit a rough spot. yeah. In short. Your efforts will very likely be rewarded. Sometimes you don’t even need to say anything. she would like to put the situation behind her too. In most cases. dig out the olive branch—or a peace pipe—and start that long walk down the company corridors to deliver it. or reject his overtures of apology. But we all know George was not known for his people skills. people avoid each other after a wrangling because they are both afraid the other person will either do him more injury. Getting things back on track with someone might be as simple as dropping off a can of their favorite soda—without any pomp or circumstance. that you have “hand” and you should keep it. make it happen this week. 175 .
but they also teach us more about ourselves—such as what pushes our buttons and what boundaries we will not go beyond. they are the sandpaper that hones and polishes us. In essence. And difficult relationships—difficult people in particular— give us practice in putting our people skills to the test. they difficult personality to deal with. And sometimes the powers that be put these people in our way to teach us lessons and help us grow to new levels of strength and maturity—and forgiveness. 151 176 . and managing ourselves in the process. It’s been said that it’s easy to be an angel when no one ruffles your feathers. are an essential part of view it as a test of your people our growth as people.151 Quick Ideas to Improve Your People Skills Every Difficult Relationship Has Lessons As much as diffiAssignment cult relationships can give us heartburn and The next time you run into a sleepless nights. But difficult relationships are the litmus test for how evolved we really are in dealing with people. Accept the challenge and Difficult relationpersevere! ships not only teach us a lot about others. skills.
177 .Quick Idea 151 Epilogue Difficult relationships are like sandpaper. They hone and polish us.
Index
Bribery, 63-64 Bully, office, 155 Burnout, people, 124-125 Courage, 142 Covey, Stephen, 36 Creator personality style, 76, 79, 80 Criticism, 59-60 Crow, eating, 147-148 Cynics, 66-67
C
Caring, 18-19 Carlin, George, 42 Carnegie, Dale, 17, 94 Celebrations, 122-123 Character, 46-47 Character, defamation of, 170 Cheerleading others, 118-119 Choosing your battles, 171-172 Commonality, point of, 167-168 Communication, face-to-face, 91-92 Communication, nonverbal, 82 Compassion, 110-111 Compassionate honesty, 110 Complaint, 59-60 Condescension, 62-63 Conflict, 136-142, 144-146, 150-151, 155-156, 163-164 Confrontation, 141 Connotation, 55-56 Consistency, 59 Contempt, 60
D
Defamation of character, 170 Defensiveness, 60 Denotation, 55-56 Difficult relationships, 176-177 Directness, 167-170 Director personality style, 76, 79, 80 Disagreements, 140 Discernment, 52-53 Dismissive, being, 61-62 Dominance, 87 Domineering, 68, 87-88 Doolittle, Eliza, 28 Doubt, benefit of the, 30-31
E
Eastern cultures, 113-114 Eating crow, 147-148
Edifying, 109-110, 129-130 E-mail, 90, 92, 93-94 Emerson, Ralph Waldo, 96 Emotion, 94-95 Emotional bank account, 124-125 Emotional equity, 95-96 Emotional intelligence (EQ), 17-18, 97-98 Emotionally charged words, 55-56 Encouragement, 72, 114-116 Encouraging quotations, 116 Encouraging words, 116 Events, social, 135-136 Expectations, 34, 35, 138-139, 146-147 Eye contact, 99-100 Forgiveness, 172-174 Formal networks, 21 Four Horsemen, the, 59-60 Friends, 96-97
G
Generations, 46 Genuine, 40-41 Goals, helping others achieve their, 119-120 Golden Rule, the, 15, 25, 30, 31, 32, 54, 78, 154 Goldman, Daniel, 17 Gossip, 60-61 Growth, 71
H
Handling conflict correctly, 136-142 Helping others achieve their goals, 119-120 Helping others be heard, 111-112 Helping others be understood, 112-113 Honesty, 110-111, 127-128 Humility, 43 Humor, sense of, 42-43
F
Face-to-face communication, 91-92 Failings, 172-173 Fear, 67, 89 Fences, mending, 171-172, 175 Fighting fair, 150-151 Fight-or-flight syndrome, 143 Flaws, 47 Food, 149-150
I
Inferiority, 82-83, 84 Informal networks, 21-22 Ingratiating, 27-28 Inspiration, 128-129 Integrity, 48-49, 142 Intelligence, emotional, 17-18, 97-98 Intelligence, social, 15, 19-21, 34 Intentions, 29-30 Interaction, live, 90-92 Intimidation, 84, 89-90
Live interaction, 90-92 Lunch hour, 134-135
M
Manipulation, 63-64 Manning, Peyton, 121 Mending fences, 171-172, 175 Meyers, Joyce, 152 Middle ground, 166-167 Mistakes, 44-45 Motives, 29-30 Myers-Briggs scale, 125-126
J
Judgment, 30, 106-107, 111
N
Names, remembering, 98-99 Need to win, 161-162 Needs, your own, 125-126 Negativity, 65, 152 Networks, 21-22 Nonverbal communication, 82
K
Karma, 159-160 Keeping your word, 50-51 Kind words, 58-59
L
Lambert, Dr. Howard, 59-60 Laughing at yourself, 43-44 Listening, 82, 86, 88, 100, 102-103, 115, 137, 155-156
O
Office bully, 155 Open-mindedness, 108 Opinions, 69, 76, 80, 105-106 Opinions, style vs., 76, 80
About the Authors

Robert E. Dittmer, APR
Bob Dittmer has more than 35 years of experience in public relations, marketing, and higher education. He has more than 25 years of experience in public relations and advertising agencies, working with a variety of clients in both the business-to-business and business-to-consumer areas. Bob has also served as the director of media relations for an American government organization with responsibilities for all of Europe, and for a major NATO organization with responsibilities for public information worldwide. He is currently a faculty member with the Indiana University School of Journalism in Indianapolis, having more than 19 years of experience as an adjunct faculty member with colleges and universities around the country. He currently teaches public relations courses, in both undergraduate and graduate programs, is the director of graduate studies, and also serves as the marketing and retention officer. Bob is also the author of 151 Quick Ideas to Manage Your Time and coauthor of 151 Quick Ideas for Delegating and Decision Making (with Stephanie McFarland).

Stephanie McFarland, APR
More than 20 years ago, Stephanie McFarland began her management career by supervising employees for her family's business. Stephanie has managed projects, teams, and departments in multinational, Fortune 500, government, consultancy, and nonprofit organizations within the past 19 years. She has provided public relations counseling to more than 20 clients and employers in the electrical and pharmaceutical industries, as well as others. Her personal philosophy of management has evolved throughout the years from merely motivating employees "to get the job done" to discovering what makes them tick as well as ways to further develop their current roles. Stephanie is an adjunct professor for the Indiana University School of Journalism in Indianapolis, where she teaches public relations management courses to undergraduate and graduate students.
Ph. Wilson ◆ 978-1-56414-830-8 151 Quick Ideas to Inspire Your Staff Jerry R.151 Quick Ideas to Improve Your People Skills 151 Quick Ideas to Get New Customers Jerry R. Dittmer and Stephanie McFarland ◆ 978-1-56414-961-9 151 Quick Ideas to Deal With Difficult People Carrie Mason-Draffen ◆ 978-1-56414-938-1 151 Quick Ideas for Advertising on a Shoestring Jean Joachim ◆ 978-1-56414-982-4 192 .D. Wilson ◆ 978-1-56414-829-2 151 Quick Ideas to Manage Your Time Robert E. Dittmer ◆ 978-1-56414-899-5 151 Quick Ideas to Recognize and Reward Employees Ken Lloyd. ◆ 978-1-56414-945-9 151 Quick Ideas for Delegating and Decision Making Robert E. | https://www.scribd.com/doc/89530096/151-Quick-Ideas-to-Improve-Your-People-Skills | CC-MAIN-2017-47 | refinedweb | 38,485 | 78.25 |
Introduction
Computing can be a surprisingly deep field at times. I find that the more I learn about it, the more I'm struck by quite how many similarities there are between different areas of the subject. I was browsing through Andrei Alexandrescu's fascinating book Modern C++ Design recently when I read about a connection which I thought was worth sharing.
As I suspect most of you will already be aware, C++ can be used for something called template metaprogramming, which makes use of C++'s template mechanism to compute things at compile time. If you take a look at a template metaprogram, however, you'll find that it looks nothing like a 'normal' program. In fact, anything but the simplest metaprogram can start to look quite intimidating to anyone who's unfamiliar with the idioms involved. This makes metaprogramming seem hard, and can put people off before they've even started.
Surprisingly, the key to template metaprogramming turns out to be functional programming. Normal programs are written in an imperative style: the programmer tells the computer to do things in a certain order, and it goes away and executes them. Functional programming, by contrast, involves expanding definitions of functions until the end result can be easily computed.
Programmers who have studied computer science formally at university are likely to have already come across some form of functional programming, perhaps in a language such as Haskell, but for many self-taught programmers the idioms of functional programming will be quite new. In this article, I hope to give a glimpse of how functional programming works, and the way it links directly to metaprogramming in C++.
For a more detailed look at functional programming, readers may wish to take a look at [Thompson] and [Bird]. Anyone who's interested in template metaprogramming in general may also wish to take a look at the Boost MPL library [Boost]. Finally, for a much deeper look at doing functional programming in C++, readers can take a look at [McNamara].
Compile-time lists
As a concrete example, I want to consider a simple list implementation. For those who are unfamiliar with them, Haskell lists are constructed recursively. A list is defined to be either the empty list, [], or an element (of the appropriate type) prefixed, using the : operator, to an existing list. The example [23,9,84] = 23:[9,84] = 23:9:[84] = 23:9:84:[] shows how they work more clearly. Working only with lists of integers (Integers in Haskell) for clarity at the moment, we can define the following functions to take the head and tail of a list:
head :: [Integer] -> Integer
head (x:xs) = x

tail :: [Integer] -> [Integer]
tail (x:xs) = xs
The head function takes a list of integers and returns an integer (namely, the first element in the list). The tail function returns the list of integers remaining after the head is removed. So far, so mundane (at least if you're a regular functional programmer).
Now for the interesting bit. It turns out that you can do exactly the same thing in C++, using templates. (This may or may not make you think 'Aha!', depending on your temperament.) The idea (à la Alexandrescu) is to store the list as a type. We declare lists of integers as follows:
struct NullType;

template <int x, typename xs>
struct IntList;
The NullType struct represents the empty list, []; the IntList template represents non-empty lists. Using this scheme, our list [23,9,84] from above would be represented as the type IntList<23, IntList<9, IntList<84, NullType> > >. A key point here is that neither of these structs will ever be instantiated (that's why they're just declared rather than needing to be defined): lists are represented as types here rather than objects.
Given the above declarations, then, we can implement our head and tail functions as shown in Listing 1.
Already some important ideas are emerging here. For a start, if we ignore the fact that the C++ version of the code is far more verbose than its Haskell counterpart (largely because we're using C++ templates for a purpose for which they were never designed), the two programs are remarkably similar. We're using partial template specialization in C++ to do the job done by pattern-matching in Haskell. Integers are being defined using enums and lists are defined using typedefs (remember once again that lists are represented as types).
Using these constructs is rather clumsy. A program outputting the head of the list [7,8], for example, currently looks like:
#include <iostream>

int main() {
  std::cout << Head<IntList<7, IntList<8, NullType> > >::value
            << std::endl;
  return 0;
}
To improve this sorry state of affairs, we'll use macros (this is one of those times when the benefits of using them outweigh the disadvantages). In a manner analogous to that used for 'typelists' in Modern C++ Design, we define the macros in Listing 2 to help with list creation.
We also define macros for head and tail:
#define HEAD(xs) Head<xs>::value
#define TAIL(xs) Tail<xs>::result
The improvement in the readability and brevity of the code above is striking:
std::cout << HEAD(INTLIST2(7,8)) << std::endl;
From now on, we will assume that when we define a new construct, we will also define an accompanying macro to make it easier to use.
Outputting a list
Before implementing some more interesting list algorithms, it's worth briefly mentioning how to output a list. It should come as no surprise that the form of our output template differs from the other code in this article: output is clearly done at runtime, whereas all our other list manipulations are done at compile-time. We can output lists using the code in Listing 3.
Sorting
Computing the head and tail of a list constructed in a head:tail form may seem a relatively trivial example. Our next step is to try implementing something a bit more interesting: sorting. Perhaps surprisingly, this isn't actually that difficult. The analogy between functional programming in Haskell and compile-time programming in C++ is extremely deep, to the extent that you can transform Haskell code to C++ template code almost mechanically. For this article, we'll consider two implementations of sorting, selection sort and insertion sort (it would be just as possible, and not a great deal harder, to implement something more efficient, like quicksort: I'll leave that as an exercise for the reader). I've confined my implementation to ordering elements using operator<, but it can be made more generic with very little additional effort.
A simple selection sort works by finding the minimum element in a list, moving it to the head of the list and recursing on the remainder. We're thus going to need the following: a way of finding the minimum element in a list, a way of removing the first matching element from a list and a sorting implementation to combine the two. Listing 4 shows how we'd do it in Haskell.
We can transform this to C++ as shown in Listing 5.
The important things to note here are that each function in the Haskell code corresponds to a C++ template declaration, and each pattern-matched case in the Haskell code corresponds to a specialization of one of the C++ templates.
Implementing insertion sort is quite interesting. The essence of the algorithm is to insert the elements one at a time into an ordered list, preserving the sorted nature of the list as an invariant.
A simple Haskell implementation of this goes as follows:
insert :: Int -> [Int] -> [Int]
insert n [] = [n]
insert n (x:xs) = if n < x then n:x:xs else x:(insert n xs)

isort :: [Int] -> [Int]
isort [] = []
isort (x:xs) = insert x (isort xs)
Translating the insert function to C++ is not entirely trivial. The problem is that we need to generate one of two different types depending on the value of a boolean condition, which is non-obvious. There are (at least) two solutions to this: we can either rewrite the Haskell function to avoid the situation, or we can write a special C++ template to select one of two typedefs based on a boolean condition.
Rewriting the Haskell code could be done as follows:
insert :: Int -> [Int] -> [Int]
insert n [] = [n]
insert n (x:xs) = smaller : (insert larger xs)
  where (smaller, larger) = if n < x then (n, x) else (x, n)
This solves the problem (generating one of two different values depending on the value of a boolean condition is easy), but at the cost of a less efficient function.
The template version (using the Select template borrowed directly from Andrei's book) does a better job:
template <bool b, typename T, typename U>
struct Select {
  typedef T result;
};

template <typename T, typename U>
struct Select<false, T, U> {
  typedef U result;
};
This allows us to straightforwardly transform the more efficient form of the Haskell code to C++ (Listing 6).
It turns out that in C++ this still isn't as efficient as it could be. The culprit is the second specialization of Insert: by defining the before and after typedefs in the specialization itself, we force them both to be instantiated even though only one is actually needed. The solution is to introduce an extra level of indirection (Listing 7).
This solves the problem, because now the chosen IntList template only gets instantiated if it is actually needed.
Maps and filters
One of the best things about writing in a functional language has traditionally been the ability to express complicated manipulations in a simple fashion. For example, to apply the same function f to every element of a list xs in Haskell is as simple as writing map f xs. Similarly, filtering the list for only those elements satisfying a boolean predicate p would simply be filter p xs. A definition of these functions in Haskell is straightforward enough:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = (f x) : map f xs

filter :: (a -> Bool) -> [a] -> [a]
filter p [] = []
filter p (x:xs) = if p x then x : remainder else remainder
  where remainder = filter p xs
Achieving the same thing in C++ initially seems simple, but is actually slightly subtle. The trouble is in how to define f and p. It turns out that what we need here are template template parameters. Both f and p are template types which yield a different result for each value of their template argument. For instance, a 'function' to multiply by two could be defined as:
template <int n>
struct TimesTwo {
  enum { value = n*2 };
};
and a predicate which only accepts even numbers could be defined as
template <int n>
struct EvenPred {
  enum { value = (n % 2 == 0) ? 1 : 0 };
};
The Map and Filter templates can then be defined as in Listing 8.
Note that we again make use of the Select template to choose between the two different result types in Filter.
Extensions
So far, we've only seen how to implement integer lists. There's a good reason for this - things like doubles, for example, can't be template parameters. All isn't entirely lost, however. It turns out that we can make lists of anything that can be represented by integers at compile-time! The code looks something like Listing 9.
The important change is in how we treat the head of the list - now we write typename x wherever we had int x before, and use the type's value field to get its actual value if we need it. The rest of the code can be transformed to work for generic lists in a very similar fashion. There's something to be said about how we handle ordering, but that's a topic for the next article!
Conclusion
In this article, we've seen how template metaprogramming is intrinsically related to functional programming in languages like Haskell, and implemented compile-time lists using C++ templates. Next time, I'll show one way of implementing ordering in generic lists, and consider how to implement compile-time binary search trees.
So what are the uses of writing code like this? One direct use of compile-time BSTs would be to implement a static table that is sorted at compile time. This can prove extremely helpful, particularly in embedded code. There are also indirect benefits derived from learning more about template metaprogramming in general. Writing code like this can be seen as a useful stepping stone towards understanding things like the typelists described in Andrei's book. The capabilities these provide are quite astounding and can provide us with real benefits to the brevity and structure of our code.
Till next time...
Acknowledgements
Thanks to the Overload review team for the various improvements they suggested for this article.
References
[Bird] Introduction to Functional Programming, Richard Bird and Philip Wadler, Prentice Hall
[Boost]
[McNamara] Functional Programming in C++, Brian McNamara and Yannis Smaragdakis, ICFP '00
[Thompson] Haskell: The Craft of Functional Programming, Simon Thompson, Addison Wesley | https://accu.org/index.php/journals/1422 | CC-MAIN-2020-29 | refinedweb | 2,179 | 57.1 |
Hi There,
I'm getting a 500 error: jira.exceptions.JIRAError: JiraError HTTP 500 URL
def _get_jira_instance(self):
    props = self.parser
    oauth_dict = {'access_token_secret': None,
                  'access_token': self.parse_prop('jira_accesstoken', props),
                  'consumer_key': self.parse_prop('jira_consumerkey', props),
                  'key_cert': self.read_jira_ppk()
                  }
    server = self.parse_prop('jira_server', props)
    print(oauth_dict)
    options = dict(server=server)
    print(options)
    return JIRA(options, oauth=oauth_dict)
All the values, like access_token and so on, come from a file.

Can anyone tell me what the problem in the above code could be?
Thanks,
Laxmi
You can debug this by inspecting the response that comes back from the request.
At the moment I can't see any reason why it should be a 500 error; it just means something went wrong on the server when you made the request.
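As a first step, you can validate the OAuth dict before it ever reaches JIRA(). This is a hypothetical, library-free helper; the field names come from the snippet in the question. Note that the posted code sets access_token_secret to None, which this check would flag:

```python
# Hypothetical pre-flight check: every OAuth field must be present and
# non-empty before JIRA() is called. Field names match the posted snippet.
REQUIRED_OAUTH_FIELDS = ("access_token", "access_token_secret",
                         "consumer_key", "key_cert")

def missing_oauth_fields(oauth_dict):
    """Return the names of OAuth fields that are missing or empty."""
    return [k for k in REQUIRED_OAUTH_FIELDS if not oauth_dict.get(k)]

# Mirroring the posted code, where access_token_secret is None:
oauth = {
    "access_token_secret": None,
    "access_token": "abc123",
    "consumer_key": "oauth-key",
    "key_cert": "-----BEGIN RSA PRIVATE KEY-----...",
}
print(missing_oauth_fields(oauth))  # ['access_token_secret']
```

If the list is non-empty, fix the configuration before blaming the server.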
Hi Bryan,
Thank you for the quick response.
When I look at the error response, I don't find any error message like "access token missing" or "token has expired", so I'm unable to figure out the problem.
Thanks again.
Hi @Laxminarsaiah Ragi ,
What Python library are you using?
I would like to try it myself. Also, are you using Jira Cloud or Server?
I would suggest not using any library yet.
Please follow the guide in accessing the API in this link.
I just tested it and it. | https://community.atlassian.com/t5/Jira-questions/Got-500-error-while-initializing-Jira-object-using-python/qaq-p/1254700 | CC-MAIN-2020-34 | refinedweb | 227 | 68.16 |
/Library/Frameworks/Python.framework/Versions/3.4/bin/python3.4 /Users/Simon/Downloads/trapped_knight_python.py
Traceback (most recent call last):
File "/Users/Simon/Downloads/trapped_knight_python.py", line 8, in <module>
import numpy as np
ImportError: No module named 'numpy'
Process finished with exit code 1Preferences/ Project/
This the error I see. When I pop over to Project Interpreter and choose + to add numpy I get
DEPRECATION: --no-install, --no-download, --build, and --no-clean are deprecated. See.
Downloading/unpacking numpy
Cannot fetch index base URL
Could not find any downloads that satisfy the requirement numpy
Cleaning up...
No distributions at all found for numpy
Storing debug log for failure in /Users/Simon/.pip/pip.log
Anyone any clues as to what is wrong?
Hi Simon,
I suspect you will have the same problem if you try to install it from system terminal (outside of PyCharm) for the same interpreter?
Looks like a problem with pip or interpreter. You could try since it really looks like you use some old version of pip. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/360002862620-Struggling-to-get-numpy-installed | CC-MAIN-2019-30 | refinedweb | 172 | 54.93 |
User talk:Rtfm
Democracy
In summary, there are 4 people who believe they are the "wiki police" and seem to think they have a monopoly on wisdom:
- Nakaner
- Polarbear w
- Mateusz Konieczny
- Adamant1 - this guy is the "worst case"; I assume his motivation is that of an Internet_troll. Another motivation for this kind of sabotage might be some commercial interest in holding things up. The third option seems less and less likely, but I haven't ruled it out entirely yet.
Instead of "nit-picking", they could discuss their point of view (how the wiki should look) with the rest of the community and summarize the outcome on a page that describes it. If someone (in good faith) then makes changes to the wiki against the (harmonized) rules, it would be easy to send them a link to that page instead of writing personal opinions.
I consider it extremely problematic and unhelpful to destroy overviews such as the one in guest_house, which IMHO enables newbies (and owners of a guesthouse) to get an overview without crawling around half of the wiki. Especially since there is no real alternative to it. There are thousands of active OSM contributors out there who would appreciate a better overview, but all the "self-appointed sheriffs" seem to do is be destructive. Also see the article a former enthusiastic OSM member wrote: "Why OpenStreetMap is in Serious Trouble".
- 1. I really don't appreciate being lumped in with the other users on your supposed "wiki police" list. In no way am I ideologically aligned with them about anything. In fact, I have gotten into more than a few arguments with all three of them on multiple occasions and been unfairly attacked by them more than once. So, treating us as a group just because we all happen to have similar criticisms of your tagging scheme is massively disingenuous. We all have a different approach to this. I don't even agree with them on why your tags are problematic. Mostly I think they just need to be clarified better and refined before being "foisted" on everyone else, and I've said so repeatedly. Ultimately, there's nothing wrong with me or anyone else requesting that you clarify things and go through the proper processes. It's extremely rude and massively dishonest to claim that us doing so is "nitpicking."
- 2. Using personal attacks or insults to call people out is never a good way to resolve things. Nowhere have I or any of the other people on your list done that to you. Doing so makes your actions seem even less justifiable. As the saying goes, "If the facts are against you, argue the law. If the law is against you, argue the facts. If the law and the facts are against you, pound the table and yell like hell." All you have been doing is pounding the table, and this is just more of it. Personal attacks are also against the rules.
- 3. If you're in the right and we are just nitpicking as you claim, it should be pretty easy for you to make a clear case for why. I have yet to see you do that, though. The only conclusion I can draw from your endless fist pounding and nonsense is that you're in the wrong and are just unwilling to admit it.
- 4. It's pretty ironic that you're going on about how we should discuss things with the community while you refuse to. You haven't even addressed the concerns any of us on your list have brought up, let alone talked about any of it on the mailing list or anywhere else. All you have done is repeatedly say "do it this way," then obfuscate, deflect, and make excuses when someone criticizes what you're doing.
- 5. It's also pretty ironic that you're citing a single user's blog post, which is full of unfounded claims that could easily be called "nitpicking" and don't represent the views of the community, while faulting us for doing the same thing. On the one hand you're fine with personally making unilateral tagging decisions and quoting fringe, non-representative blog posts, but then supposedly we are in the wrong when we do things, even when they have been discussed by the community. You can't have it both ways. If you're able to plaster your tagging suggestions all over the wiki without anyone agreeing to it, then we are just as able to remove them without a long drawn-out "community" process being involved. Otherwise, lead by example. Discuss your tags before suggesting them and respond to people's criticisms. Otherwise, you have no room to talk or leeway to point the finger at us. --Adamant1 (talk) 06:27, 30 November 2019 (UTC)
- O.k., I see this should be differentiated: you're another kind of "wiki police" than the others. More the "social education worker", with a disposition toward lengthy accusations without examples of what you're talking about, and without referring to answers like the below-mentioned "There was an example of undiscussed change regarding shop=car which I mentioned twice on the mailing list". Instead, you again insinuate that I don't take part in the designated discussion process. And a kind of "sense of mission" similar to Jehovah's Witnesses (if you just repeat it often enough, it becomes true). (Sarcasm off) user:rtfm Rtfm (talk) 19:07, 4 December 2019 (UTC)
- No, I'm not the "wiki police" at all. This isn't a moral thing. Otherwise, instead of making vague accusations or personally attacking me, point out a specific thing I have done that resembles me "policing" you or even comes close to your spoon analogy. You're the one that keeps getting on others for not providing examples, yet every message you've written that I've seen is either extremely vague, needlessly sarcastic, or cites some irrelevant wiki article (that usually undercuts your own argument). As far as your thing about the undiscussed changes to the shop=car wiki, I have zero opinion on that, as I wasn't involved in it and didn't participate in the mailing list discussion. Which you know. I'm not here to defend other people's actions or explain actions I have nothing to do with. Ask me something about an issue that I was actually involved in, though, and I'm 100% willing to discuss it. In the meantime, there are plenty of discussion pages where I asked you specific things that you have ignored, instead deciding to do this kind of personal, off-topic deflection. For instance, both my questions of how using ":" values doesn't cause the same exact problems you cite for why ";" values shouldn't be used instead. You also haven't responded to my points about the discrepancies in the motorcycle friendly votes. Especially your cognitive dissonance in asserting that voting is too obscure and cryptic for normal users to take part in, while also claiming we should just accept all the votes from users with zero prior experience, whose votes (according to your own logic) would be uneducated and incapable. Both those things should be easy to answer. I don't give two whatevers about some choice quote from a wiki article, which you're probably taking out of context anyway, or some deflection about how there isn't a rule about it. I'm asking for your personal opinion on these things because you're the one pushing them and claiming this stuff is OK. A few things are pretty clear at least: 1. You clearly don't take this seriously (resorting to personal attacks, sarcasm, deflecting, etc. instead of answering simple questions). 2. You obviously couldn't care less about other people's opinions (calling anyone who disagrees with you the "wiki police" etc.). Neither of those are great. Personally, I've been pretty specific, kept away from personal attacks, and provided plenty of examples. You should do the same. Respond to my points, for example on the motorcycle friendly discussion page, in a clear, logical way, instead of just writing baseless diatribes about "policing" like you're doing here. Otherwise, we aren't going to get anywhere and we will keep having problems. It's not that complicated. --Adamant1 (talk) 00:29, 5 December 2019 (UTC)
motorcycle_friendly=yes
Hi Rtfm,
I have moved the feature documentation pages to the user namespace (another user moved them to the Proposed namespace later) because the proposal of that tag was not accepted and most people think that this tag should not be in OSM because it is not verifiable (you participated in those discussions in March and October 2017 on the Tagging mailing list). The lack of an accepted proposal and that lack of verifiability were the reasons why I removed motorcycle_friendly=* from all wiki pages where it was mentioned (except its documentation page and the proposal itself).
I hereby ask you to acknowledge the decision of the majority and revert your revert of my deletions. --Nakaner (talk) 17:01, 4 February 2018 (UTC)
- In addition to stopping your edit war, I kindly ask you to control your language, and to stop calling the actions of users who act in consensus with the community, after long and detailed discussions, with correctly attributed reasons, 'vandalism', as you did here in the wiki, and here in a mailing list. --Polarbear w (talk) 21:31, 4 February 2018 (UTC)
- Thanks for the nice example. "Consensus with the community, after long and detailed discussions" is just ridiculous looking at this [1] discussion; could you please explain where the "consensus of the majority" is? I also asked Nakaner on the discussion page of the (original) wiki page to explain why he thinks the deletion is necessary (and the "majority" there was at most 3 people) and got no answer. I wouldn't call this a democratic decision. And by the way, the definition of "vandalism" is "action involving deliberate destruction of or damage to public or private property" [2]. What do you think of this Any_tags_you_like#When_to_create_a_proposal definition? rtfm Rtfm (talk) 18:47, 5 February 2018 (UTC)
- Please point me to all postings in favour of your tag. Please revert your edits as I asked you to do if you are unable to give links to postings by at least three different users. I won't revert them myself, I will ask a sysop to do the job because you would just continue an edit war with me. --Nakaner (talk) 20:07, 5 February 2018 (UTC)
Just for reference, Frederik Ramm, who is also a member of the Data Working Group, has posted a statement about the two tags in question on the tagging list. --Polarbear w (talk) 10:21, 6 February 2018 (UTC)
- Thanks for the hint, already replied. The reason for the removal from the wiki still remains unexplained. To make the question clear: what is the motivation / purpose of these activities? user:rtfm Rtfm (talk) 16:14, 10 February 2018 (UTC)
- "still unexplained" - see Mateusz Konieczny (talk) 16:41, 10 February 2018 (UTC)
vandalism
Please be aware that well-meaning edits, even ones that you disagree with, are not "vandalism".
Also, you may be unaware of this, but [sockpuppeting] (creating an online identity used for purposes of deception) is considered a Very Bad Thing To Do. Mateusz Konieczny (talk) 07:43, 6 February 2018 (UTC)
- In other words: RTFM Mateusz Konieczny (talk) 07:44, 6 February 2018 (UTC)
- I assume terrorists would also call themselves "well-meaning" from their point of view. Please explain why you think this is well-meant. Please also consider that Defamation is also a "very bad thing". user:rtfm Rtfm (talk) 16:14, 10 February 2018 (UTC)
cosmetics:type=* etc.
You added cosmetics:type=*, cosmetics:sales=* and lots of other tags today to Tag:shop=beauty. However, these tags either do not appear in the database or are rarely used. I doubt that the vast majority of the people who participate in tagging discussions recommend their usage. Wiki pages in the main namespace should only mention tags which are in use or have an accepted and valid proposal (you should know that already, but I mention it here again). Could you please either point me to a proposal with a valid and fair vote about these tags, or remove them by 2018-02-28 20:00 UTC? --Nakaner (talk) 19:58, 25 February 2018 (UTC)
- I'd like to add that you already added these tags as "Proposal" to the main page in Oct 2017, where I had removed them for the same reason as Nakaner mentions above, and referred you to the discussion page. Instead of discussing them, you re-added them to the page, which I consider an edit war. I was hoping that after recent discussion with the Data Working Group, you would stop such edit wars, and stop adding elements to the documentation that are not used, not needed, and/or not verifiable. --Polarbear w (talk) 21:37, 25 February 2018 (UTC)
discussion invitation
Can you respond at ? Mateusz Konieczny (talk) 22:37, 25 February 2018 (UTC)
- Hi, missed this one; does this answer your question?
- In other words: there were several attempts to clarify this via the mailing list, but with no constructive result. Instead of preventing development, try constructive input as an alternative. user:rtfm Rtfm (talk) 11:15, 11 May 2018 (UTC)
General advice
I would reconsider your username, and I would strongly reconsider linking to the "read the fucking manual" Wikipedia article on your discussion page. This kind of comment is justifiable only if you are an expert in some topic and others are asking for free advice without any effort on their side, and even then it is quite hostile.
In other situations it is just hostile, and in addition you claim to be entirely aware of the situation, effectively asking not to be given any benefit of the doubt. Mateusz Konieczny (talk) 09:01, 26 February 2018 (UTC)
page blanking
Please stop blanking the page. I am not sure why you are so determined to remind everybody about it - I suggest to move on and stop reminding everybody through erasure attempts. Mateusz Konieczny (talk) 07:39, 11 May 2018 (UTC)
- Look who's talking... Who destroyed this page without discussion on the mailing list? And made wrong accusations? There were already several translations, so obviously there were people who found it useful. AFAIK there's no rule about how many edits a user needs to have made before voting on a proposal. And in general, a proposal is only needed in case it affects other interests: Any_tags_you_like#When_to_create_a_proposal. So what is your interest, except playing sheriff? user:rtfm Rtfm (talk) 11:23, 11 May 2018 (UTC)
- "Who destroyed this page without discussion on the mailing list" - from looking at it seems to be done in and Mateusz Konieczny (talk) 14:10, 11 May 2018 (UTC)
Tag:brand=Harley-Davidson
I'm pretty sure the wiki isn't supposed to be a repository for brand information that's not related to OSM or mapping in any way. There are millions of other sites for that. OSM is not a product database either (there was a thing about listing specific product inventory in stores a while back that was rightly knocked down). Anyway, the brand key is also not for the purpose of listing every possible product a place might sell. It's also stretching the definition of a tag to say brand=Harley-Davidson is a de facto tag in the first place. There are no "de facto" name tags. Therefore, I'm going to request the page be deleted, along with the other motorcycle brand tag pages you created, and for good measure I'm also reverting your edit on the brand page about motorcycle shops. Feel free to report me to an admin if you have an issue with it. --Adamant1 (talk) 04:00, 12 January 2019 (UTC)
- If you're only "pretty sure", you should possibly discuss that on the mailing list?
- The current documentation should then be enhanced to be sure:
- rtfm Rtfm (talk) 04:54, 14 January 2019 (UTC)
- Really? That's pretty funny advice coming from you. You must be one of those people who go off about how bureaucracy and rules are BS only when other people expect you to follow them, but then expect everyone else to do everything "properly." Don't expect other people to follow standards you have no respect for. Follow your own criticism of others and don't "insist on doing everything through channels."
- Anyway, it's not like the admin can't figure it out if I request the pages be deleted, and just not delete them if you're correct.
- Btw, the bad thing about the whole "ignore all rules" thing is that if you apply and push it, it can be used against you, since you set the precedent. So someone could easily create new tags to replace yours, spread them around, and retag your objects with them. Then what? You can't cry foul or report them for not following the rules (well, you could, but it would be massively disingenuous and backfire miserably). That's why it's better to have rules. Maybe it doesn't go your way all the time, but then when it does, someone can't come along and derail things as easily without there being serious consequences. That's the trade-off of "channels" and the other things you have problems with. They keep this from being complete anarchy. (Adamant1)
- I'm glad you seem to understand my sense of humour ;-)
- Regarding the "democratic tools" within OSM, I think there should be a "technical and organizational update" to include the meanwhile more than 5 million contributors (who are possibly not willing to follow all the mails or to edit page code in the case of proposals). The current implementation is IMHO a little "eighties"; it may have worked when there was just a small number of participants, but currently it is not very democratic, as it is only suitable for nerds and similar people.
- rtfm Rtfm (talk) 12:10, 17 January 2019 (UTC)
- I mostly agree. I got a good amount of condemnation from some of those types a while back when I dared to suggest that voting on proposals was outdated and not popular anymore, but it is what it is. No matter how much I might hate the protracted procedures and having to deal with a bunch of people to make simple changes, which I totally do, it's how things are, and it's better than anarchy, or for that matter authoritarian rule of these things (which I have heard suggested and would make it impossible to change anything). The reality is that the more OSM grows, the more restrictive it will be, because they have a vested interest in keeping things stable. Ultimately we are either forced to work with it on their terms or hasten it through our disobedience. It's your choice which one you want to do, but I'd rather err on the side of caution in the belief that it will buy some time before the crackdown.
- There will also always be the few elite, in-crowd users who think their opinions are supreme and have the clout to push things in the direction they want. It might be extremely annoying and stifle progress in a lot of cases, but it's just something you have to deal with along with the other mishegoss. Such is life. Whatever the case, I'm still against your tagging scheme. So I'm going to do what I can to voice my opinion against it and also deal with edits to the wiki that don't stay within the rules. I'll pass on participating in the tagging mailing list to do it, though ;) --Adamant1 (talk) 10:26, 22 January 2019 (UTC)
Tables
If you're going to push your tagging scheme everywhere, at least don't use tables without explaining what the hell your tags mean and how they are used, because the way you currently do it is pretty confusing. "Services" as a section heading without further explanation is nonsensical. Also leave out the extra "use" section and whatever the other one is. They are both spammy and, again, don't make sense within the context of the articles. If it's a section on "use", it should tell how the tag is used, as in how to map it, which is the purpose of this wiki, not how to show the tag in some map app. Finally, don't create internal links to articles that just link back to the same page the link is on. It's really dumb, misleading, and a common tactic used by shills on Wikipedia to make their fake articles look more notable than they are (which I'm sure you know and is the reason you're doing it). Btw, don't edit war people for putting in stuff you don't like if it's true. If you're going to decry things not being "democratic" on here, don't act so unilaterally, and respect other people's opinions. The general opinion of other users is that your tagging scheme is trash and that you shouldn't push it in the disingenuous way you do. That's democracy. Suck it up and deal with it, or remove the mention of its importance on your talk page. Nothing's worse than that kind of hypocrisy. --Adamant1 (talk) 15:26, 21 March 2019 (UTC)
- I'm not sure what you're talking about; perhaps some "internal links" could illustrate that? With "don't use tables" I assume you mean this useful wiki function? Taginfo/Taglists#Embed_tag_lists_in_this_wiki RTFM Rtfm (talk) 16:45, 22 March 2019 (UTC)
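For reference, the embed function referred to here works by placing a template into the page source, which then renders a table with descriptions and usage counts pulled from taginfo. A minimal sketch (the parameter names are from memory, so check Taginfo/Taglists#Embed_tag_lists_in_this_wiki for the exact syntax):

```wikitext
<!-- Render a table of documented values for one key: -->
{{Taglist|key=shop}}

<!-- Or restrict the table to selected values only: -->
{{Taglist|key=shop|values=motorcycle,atv}}
```

The point of using the template rather than a hand-written table is that the descriptions and counts stay in sync with the wiki's tag documentation automatically.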
Vandalism
Don't engage in an edit war and accuse me of vandalism. If you have a valid argument for why the sections I said were not relevant to the article actually are, state it on the talk page and have a discussion about it instead of just reverting me. I think my edits were 100% valid. As I said, the "use" section should be about how to use the tag, not about how to find it in mapping apps. I also have every right to add the alternative for motorcycle:clothes and to fix your links that don't go anywhere, just as Mateusz Konieczny has a right to mention the low use and lack of wider acceptance of your tagging scheme. Either be civil, discuss things, and don't try to own articles. Otherwise, I'll report you to an admin or the DWG for edit warring. See
- Since we're already talking about "low usage", isn't it better to let the usage be counted automatically?
- motorcycle:clothes
- clothes=motorcycle
- And could you please explain why clothes=motorcycle should be better than a namespace?
- Regarding the usage topic, I partly agree; this should be in a section similar to Comparison_of_Android_applications, but with the supported POIs shown, such as in OsmAnd#Examples_of_OSM_POI_categories_supported_in_OsmAnd. If you have a good idea how to design such an overview, I'll be glad to delete the info on the motorcycle page and link to the other one instead. Rtfm Rtfm (talk) 13:01, 24 March 2019 (UTC)
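As a side note, the automatic counting mentioned above can also be done outside the wiki by querying taginfo's public API. A small sketch in Python (the endpoint path and response shape are my assumptions about taginfo's v4 API; verify them against the API documentation on taginfo.openstreetmap.org before relying on this):

```python
# Sketch: fetch how often a tag is actually used, via taginfo.
# Assumption: taginfo's v4 API exposes /api/4/tag/stats and returns
# {"data": [{"type": "all", "count": N}, ...]} - verify before use.
import json
import urllib.parse
import urllib.request

TAGINFO_STATS = "https://taginfo.openstreetmap.org/api/4/tag/stats"


def stats_url(key: str, value: str) -> str:
    """Build the taginfo stats URL for a key=value pair."""
    return TAGINFO_STATS + "?" + urllib.parse.urlencode({"key": key, "value": value})


def total_count(payload: dict) -> int:
    """Extract the overall object count from a stats payload."""
    for row in payload.get("data", []):
        if row.get("type") == "all":
            return int(row.get("count", 0))
    return 0


# Live usage (needs network), e.g. to compare the two rival schemes:
#   with urllib.request.urlopen(stats_url("motorcycle:clothes", "yes")) as resp:
#       print(total_count(json.load(resp)))
#   with urllib.request.urlopen(stats_url("clothes", "motorcycle")) as resp:
#       print(total_count(json.load(resp)))
```

Printing those two counts would give exactly the motorcycle:clothes versus clothes=motorcycle comparison being argued about in this thread, without anyone having to count by hand.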
- "isn't it better to let the usage be counted automatically" The problem is that usage isn't an indicator of anything, since it's pretty easy for a single user to import a bunch of one tag quickly. Which is what it looks like happened with motorcycle:clothes. Otherwise, both would probably be extremely low, as there's probably not an actual need for the tag in the first place, since motorcycle:clothes was artificially inflated and no one ever used clothes=motorcycle.
- "could you please explain why clothes=motorcycle should be better than a namespace" Explain to me why a namespace is better than clothes=motorcycle. The clothes=* tagging scheme has been around forever and has worked great without issue so far. You're the one proposing a new tag, so it's on you to prove why it's better than the established tag. It's not on everyone else to prove why the established tag should be used. That's the kind of backward reasoning people without a valid argument make: "I don't have to prove there's a god. You have to prove there isn't. Ha!!" Whatever. We could do that with anything and we would never get anywhere. You're the one pushing the new tagging scheme. Either make a good argument for why it's an improvement or stop pushing it. Period.
- "If you got a good idea how to design such an overview, I'll be glad to delete the info on the motorcycle page and to link to the other one instead." You're the one who wants to show that information and thinks it's useful to show. No one else here does. So you design the way to make the overview non-intrusive. It's not our job. It's the same as my point above. Someone can't just come along and put whatever badly designed, irrelevant thing they want in an article, expect everyone else to make it look good for them, and then refuse to have it deleted. Sorry, but that's not the way it works. There have to be some basic standards for what's allowed in an article. Before you moan about "well, where's the rule then?" again: with some things it's just a given that they shouldn't be included, or else you need serious community buy-in to do it. This is one of those things. Sorry, but that's life. Like I said above, either put the effort into getting other people on your side with this stuff or stop with it already. Those are really your only options. Otherwise, you're going to keep getting the push-back from everyone that you are, and it will continue being a needless uphill battle until you cooperate with people. --Adamant1 (talk) 10:04, 3 November 2019 (UTC)
shop=street_vendor spam
Please stop linking this specific shop everywhere. What is the point of linking it from say ? Mateusz Konieczny (talk) 07:35, 7 April 2019 (UTC)
- I'd say a greengrocer has a typical street_vendor background:
- IMHO related articles in the wiki should be linked?
- Strongly related ones, not everything mildly related. And especially, adding descriptions of weird alternative tagging schemes on pages documenting established tagging is a poor idea. Mateusz Konieczny (talk) 19:24, 7 April 2019 (UTC)
Please stop spamming
Inventing new tags is OK, using new tags is OK, documenting new tags is OK and desirable.
But linking your barely used tag everywhere is unwelcome spam.
Please stop doing that Mateusz Konieczny (talk) 06:30, 27 April 2019 (UTC)
- I have no clue which tag you're talking about; has it got something to do with this list? user:rtfm Rtfm (talk) 07:08, 27 April 2019 (UTC)
- In this case - yes, though the problem is more general. Mateusz Konieczny (talk) 08:41, 27 April 2019 (UTC)
- Is it too complicated to just tell me what you mean by "linking your tag that is barely used everywhere", or is it a tactic to stay incredibly vague?
See "it is desirable to have objective criteria for tagging" - the same IMHO applies to the wiki structure and contents. If there are "objective criteria" which I didn't respect, just send me a link to them. user:rtfm Rtfm (talk) 08:52, 27 April 2019 (UTC)
- Tags that are rarely used (Key:dinner, Key:stove, Key:balcony, motorcycle_friendly=yes) should not be linked from pages describing tags that are actually used. It is OK to link and describe actually used and useful tags Mateusz Konieczny (talk) 09:37, 27 April 2019 (UTC)
- ROFL. And how should they get used if they are hidden? Please send me a link to the page which describes how many usages are necessary to document a tag. Dinner is just a (logical) complement to breakfast and lunch, stove to fireplace / washing_machine etc., balcony to terrace. All the tags had already been used and weren't "invented" by me. This should ease the overview for others, so they don't need to search the wiki (or taginfo) for every single option; that's just time-consuming and annoying. I'm sorry, but I absolutely cannot understand the "logic" you're following. It seems you just want to prevent an overview for new users. So please help everybody save time and send me the link to the wiki rules which prevent overview tables with useful tags. user:rtfm Rtfm (talk) 11:32, 28 April 2019 (UTC)
Fundamental communication problem
From reading this discussion page (and other discussions about your changes), I get the feeling that there is a fundamental communication problem at work here.
It seems that you think those people on your internet pillory at the start of your discussion page are acting in bad faith and want to destroy your work by constantly nagging about your well-meant attempts to harmonize tagging practices and document how to tag things.
I can understand that you get defensive and insinuate that people who apparently question fundamentally how you work mean you harm. It is only human to react this way when things get personal. So, the key is to not get personal, but instead always communicate in a factual and polite way. In fact, collaboration at such a large scale as OSM is only possible if we uphold a certain code of conduct. So, I'd like to kindly ask you to remove this internet pillory and refrain from personal attacks in the future. The same of course holds true for your "adversaries"; the tone really took a turn for the worse on this page, from all parties. People can get banned over unacceptable communication behavior, not only over vandalism in the wiki or the map.
A reminder to anyone reading this: We are all pulling together here, and we can assume that all of us have good intentions moving forward with OSM, so quarrel and disregard for another would only be a really good way to sabotage ourselves.
That being said, the fundamental issue you are reproached for again and again by different people, is, basically, you skipping the rules, the democratic process: Disregarding failed proposals, documenting new features and attributes without prior discussion or proposal. We had a chat about this via some private messages on openstreetmap.org in autumn 2018.
I want to suggest to you that before continuing to edit the wiki in controversial ways, try to seek peace with those complaining first by politely discussing in general what is the problem with the current practice and how to come to terms. This requires of course that Mateusz Konieczny, Nakaner etc. are willing and able to discuss it in an equally civil way. I have worked with Mateusz Konieczny closely before and I experienced him as a very precise and considerate person, so I have no doubts in that at least he would be able to, given a nonviolent precedent.
Let this section be a starting point for this. Alternatively, talk to each other in private, but please do! I am sure what you told me in the private messages back then is something he/they can very well empathize with --Westnordost (talk) 18:59, 28 April 2019 (UTC)
- I agree there's obviously a communication problem. But there's also a lack of standards for when a tag may be documented. And I have a problem when people say "not like that" but don't provide an alternative for how to solve it instead. user:rtfm Rtfm (talk) 20:19, 28 April 2019 (UTC)
"Pseudomoves" of pages
I just bumped into a page update of Key:bbq=yes commented with page moved from value=yes to bbq, taglist added. It looks like you copied over the content and changed the redirects; is that correct? If yes, I would prefer that you use move when performing page moves. You can find it below the More menu. This will keep the page history together and also enables moving back in most cases. Thanks for your consideration. --Tigerfell
(Let's talk) 21:31, 29 April 2019 (UTC)
JA:Tag:bbq
Hi, there is a problem with JA:Tag:bbq -- it should be a key, not a tag page, but JA:Key:bbq already exists. Thanks! --Yurik (talk) 02:31, 17 June 2019 (UTC)
- Yes, this was a mistake, but I didn't know how to remove the page. It may be deleted. user:rtfm Rtfm (talk) 17:41, 17 June 2019 (UTC)
Please do not add descriptions of unused or barely used tags on pages describing other tags. If you want to propose/document them, please create a new separate page Mateusz Konieczny (talk) 20:43, 17 August 2019 (UTC)
A few thoughts
While I don't think there should be a certain number of uses for a tag to be "documented" (as in creating a wiki article for it), I do think there should be "some" usage of a tag before it can be listed in a table of additional tags on a main tag's page. Because:
1. Through association it then looks like a good (or the only) add-on tag for expressing the existence of that object on the map when it might not be (in other words, there could very well be other, better alternatives that people overlook due to going with the "default").
2. By listing any tag regardless of usage, pages could easily turn into an unmanageable list of nonsense "brainstormed" keys that might sound good in theory, "because I thought of it and show me a rule saying I can't include it", but which no one else actually wants to use or cares about. Which seems to be the case with a lot of the whatever:whatever tags you have come up with and added to main pages. Especially the more obscure ones where you're the only one adding them to the map, probably as a way to make them look "legitimate" when they aren't (for example, see some of the "services" tags on the shop=atv page). As it is, main tagging articles are a "community" space. We all add those tags to the map, and the tags documentation on those articles should reflect our collective opinions about how to use the tags and which "supplemental" tags are best to add with them. So there should be at least some "community buy-in" with what you're doing, and there isn't. So it either shouldn't be done or should be done differently. Otherwise there will always be resistance and undermining of what you're doing, no matter how right you think you are about it (btw, it's a hard lesson I've had to learn over the last few years. Not following it by "doing my own thing" has led to many missteps and me mostly being alienated from the community. Which hasn't been good. It's been a waste of everyone's time, including my own, and probably set things on OSM back to a degree that it shouldn't have).
3. There's potentially an endless number of possible tagging combinations. Maybe there isn't an official rule, per se, about how many uses a tag has to have before being documented or included in lists of tags on main tagging pages, but I'm sure everyone, including you, would agree there have to be some kind of content standards on the pages of widely used and accepted tags. Otherwise it would be easy for the articles to get out of control by turning into a bunch of tables listing millions of the possible tags we could use (but probably shouldn't). Which ultimately just hurts the map and damages the ability of people to tag things on it correctly, due to not knowing what is "good tagging" or not.
4. The whole point of main article pages is to describe how to use that tag. They are not intended to be used as a "clearing house" where every tag that might be semi-related to it is dumped. If you want to create lists of "possible" tags, create list articles for them like Wikipedia does and then maybe link to the list articles from the main tag's page. That way the tables at least won't "flood" the pages. By doing that, you can also create subsections for each tag describing it, instead of expecting everyone to know what the tag is or how it is used just because you do, and people could also discuss each tag's pros and cons on the list article's talk page, instead of having to just be against all of them through conversations that take place on the main tag's talk page or here. Where they really shouldn't be discussed, because it just devolves into arguments over "rules" or whatever kind of deflection happens. At some point there should be some kind of discussion about whether the tags you keep adding and promoting are actually worth using or not, even if you don't think there should be a discussion about it. The fact is that you're the main (only) person pushing the usage of these tags. Lay people will use whatever tags we (including you) recommend and endorse. So at some point talk has to go beyond "but, but, people use them" to whether they are actually good tags to use or not, and there are plenty of other (probably better) possibilities out there. It might also be just as likely that these things don't even need to be tagged in the first place. Which could explain why some of them have such low numbers or no preexisting tags for them (not lack of exposure).
--Adamant1 (talk) 08:05, 3 November 2019 (UTC)
- Thanks for your thoughts, though it would be easier to understand if you'd given some examples (probably by linking them).
- Regarding "everyone including you would agree there has to be some kind of content standards", I totally agree. Especially regarding namespaces. There was an example of an undiscussed change regarding shop=car which I mentioned twice on the mailing list. There was some reaction, but no action has been taken (up to now). I can't understand why a tag with 105 763 entries gets so little attention but some "fine-tuning" tags do. So I think the best way to show me this "discussion" thing works would be to take care of this tag. While doing that, a principle of namespace usage could be established ("there has to be some kind of content standards") to avoid the same discussion with every new key (and especially a total mess as in the shop=car example). user:rtfm Rtfm (talk) 12:44, 3 November 2019 (UTC)
- I thought I was pretty clear about my points, but maybe I wasn't. So, what exactly do you need an example of?
- Regarding "I totally agree. Especially regarding namespaces.": no one that I have seen in these discussions is arguing against the practice of namespaces as far as it being "a system of tagging". The issue people have is with the specific tags you come up with and how you push them without discussion. That would still be happening whether they were "namespace" tags or not. It seems like you refuse to address specifics, though, and keep deflecting to "the system". It's not about the system. It's about your particular implementation of it.
- In regards to ""there has to be some kind of content standards" to avoid the same discussion with every new key", again you're getting it wrong. No one cares about "new keys". They care about specific things you do with them on articles and how you treat them as more legitimate than they are. Anyone can create a new key and use it on their own without any discussion being involved. The discussion comes into it when the person tries to enter that key into the "public sphere" as if it's widely accepted, on "community"-run and maintained pages. I'm not sure why you don't get that distinction, but if you had just used the keys on your own without pushing them on everyone else, there wouldn't have been a problem.
- That said, for every (well, most) widely accepted tag it does take some discussion and time. It's a slow process and takes community involvement, but there are good reasons for that. You can look through all the failed proposals to see where people have gone wrong with tags by not thinking them through properly, creating unnecessary duplicates, or just creating tags that no one will use. There's never going to be a point where it's "namespaces are cool, so let's just come up with whatever combinations of millions of namespace things we can think of and plaster them everywhere" like you want it to be. There has to at least be some discussion, on some level, of which namespace values are worth using and which aren't, and as you've already been told multiple times by me and other users, just mentioning a namespace key you plan to use on the talk mailing list in an unrelated discussion doesn't cut it. You have to go through the proper process.
- Like I said above, my guess is that most of the namespace keys you came up with are completely worthless and no one will ever use them except you. Until they are widely used, or you go through the proper process for them to be adopted, they shouldn't be mentioned everywhere like they are. In most cases, just because we can do something doesn't mean we should, and I think that goes for a lot of your namespace keys. Maybe some of them are worth using, but you won't even have a discussion about it or let people figure out which ones. That's where the process comes in. As it currently is, though, we are forced to throw the baby out with the bath water, since you're not proposing the tags properly. --Adamant1 (talk) 13:28, 3 November 2019 (UTC)
- Again, it would be easier with some examples (probably you understand it better as "it is desirable to have objective criteria"?). And it seems you didn't even have a look at the shop=car example (the first part of "Tags used in combination"). Also see "undocumented and formerly unused tags via preset without any discussion or proposal process is something I didn't expect from the main osmf endorsed editor". user:rtfm Rtfm (talk) 18:45, 4 November 2019 (UTC)
- Did you miss the first part of my message, or are you just being obtuse on purpose? Like I said, "what exactly do you need an example of?" How exactly is your link to Verifiability related to the discussion? As far as shop=car goes, I'm not sure what your deflection to iD Editor's mistake has to do with the value of your tagging scheme. Both can be crap. Something you come up with doesn't magically become fantastic because someone else flubbed up trying to do a similar thing. Like I've said already, instead of endlessly deflecting, just state why your tagging scheme, and more importantly the individual tags, are better than what's currently being used, or stop pushing them. It's pretty simple.
- Btw, your tagging list link 404's. Which is fine, because I have followed all the discussions about iD Editor's mistakes (I probably know more about it than you do, as someone who is a dev on multiple projects related to iD Editor and OSM in general, and therefore follows this stuff to an often nauseating degree). Like I said already, it's not relevant to your actions here or how you should remedy things by following the process. If your tagging scheme and the tags you push were really that great, there would be no reason why you wouldn't just follow the procedure. Other people would be using the tags too, but they aren't. Even with you plastering them on every #R$% page you can. Think about it. --Adamant1 (talk) 10:08, 12 November 2019 (UTC)
- Oh yeah, one more thing. If you had bothered to read the When NOT to use semi-colons thing, you would have noticed that all the same things apply to your name:space thing. "It is particularly important to avoid tags which define what an element is." That's exactly what your namespace tags do. "There are normally a couple of alternative approaches: 1. Choose one of the values: take the overriding "primary" value and go with that. 2. Split the element: separate things out into distinct features to allow them to be tagged separately with normal tags." Your namespace tagging scheme doesn't do or allow for either of those things. So the same exact reasons you gave for why the semi-colon thing is wrong also apply to your own alternative. Ten bucks says you just ignore that and deflect by linking to some other irrelevant crap, though. Christ, this whole thing is ridiculous. --Adamant1 (talk) 10:25, 12 November 2019 (UTC)
edit wars
Please see "There appears to be an edit war on this page. I have now locked this page against changes for 3 months; please use this time to discuss the topics that you are disagreeing about on the discussion page." Please reply to the discussion at the bottom of Talk:Proposed features/motorcycle friendly/tag description rather than restarting the edit war Mateusz Konieczny (talk) 23:02, 16 November 2019 (UTC)
- Hello Rtfm, I have seen you are reverting the reverts on the motorcycle friendly page. Some parts of this paragraph can hardly be questioned (e.g. that the voting wasn't announced on the tagging mailing list); others are not completely proven (that some participants of the voting are sock puppets, for example). From looking deeper into the case, I agree that 12 out of 13 fake accounts seems too much, while it is still suspicious that most of these people have been inactive or almost inactive apart from this voting. Do you admit some of those accounts have been created by you (or that you asked people to create an account and vote in favor), or do you thoroughly reject this accusation?—Dieterdreist (talk) 07:44, 25 November 2019 (UTC)
- None of these accounts have been created by me (except rtfm, of course), and I'm willing to take legal action if this farce goes on. user:rtfm Rtfm (talk) 18:07, 30 November 2019 (UTC)
- Do you know some of the people who have voted? —Dieterdreist (talk) 01:19, 1 December 2019 (UTC)
- A bunch of them. Rtfm (talk) 16:13, 1 December 2019 (UTC)
- Sorry, but this confirms the sock-puppet theory. These accounts may not have been created literally by you (and it was maybe not all those that have been claimed, but just a "bunch"), but effectively these were people not involved with OpenStreetMap who acted on your behalf. Also, that the voting wasn't announced on the channels where it should have been is a fact. —Dieterdreist (talk) 16:35, 1 December 2019 (UTC)
- Not so sorry on my behalf. What's the minimum number of edits a user must have to vote? Are there also any other limits, such as age, nationality or similar? Please also check the definition of Sockpuppet: pseudonym. These were people acting in their own interest, as they would appreciate (the usage of) this kind of key. But they are not so keen on the usual kind of (theoretical) discussion here. And as long as the key isn't established, they won't edit so much. Kind of a chicken-and-egg problem, you see? user:rtfm Rtfm (talk) 17:12, 1 December 2019 (UTC)
- I welcome every contributor, but this is not what the voting process for new tags is about. In the end, the success of a tag can only be measured through its adoption, not through voting. Voting is about finding a suitable representation (which tags should exist and what they represent). Having people vote who have no idea about tagging in general, about other tags, or about the OpenStreetMap project in general is not helpful, because these people cannot assess whether a tag is suitable. It may not be written anywhere, but it can be seen by looking at the purpose of the process. This has nothing to do with discrimination on the basis of age or nationality. People who are not interested in discussions on tagging should not participate in such discussions (i.e., should not vote). —Dieterdreist (talk) 22:13, 1 December 2019 (UTC)
- I get your point. Transferred to a city, this would mean those who aren't part of the council (discussions) don't have the right to vote in a referendum? And this is a kind of "implicit rule" without a need to be defined? And a member of the building yard should not ask them to participate in voting? And those who infringe this "implicit rule" should be put in a pillory with a sign around their neck reading "guilty of voter fraud"? user:rtfm Rtfm (talk) 18:45, 4 December 2019 (UTC)
- RTFM: "I consider the voting system not as democratic as it requires a minimum of IT knowledge as of the handling." The difference, IMO, is between asking people who wouldn't normally vote, but know how to and are educated on the voting/tagging process, to participate in it, and leading a bunch of completely illiterate, uneducated voters by the hand to a voting box so they can check a box for your initiative that they know nothing about. The latter is wrong, misleading, and exactly (if we take your word that they aren't shill accounts) what you did. It isn't any less wrong that it might benefit the voters to vote for the tag, or even if you could argue that, had they understood the process etc., that is how they would have voted.
- You're saying it takes special "IT knowledge" to vote that most people don't have, including, by your own standards, the people who voted for the motorcycle friendly tag. But then you're also saying that we should accept those incompetent (your standards) users' votes "just because". You can't have it both ways, discounting the voting process as too obtuse etc. when it doesn't favor the outcome you want, but then also accepting it when it would benefit you. Nor is it OK to have an extremely low or no standard for who can vote when it's your proposal, while refusing to go through any other processes and criticizing them in any other situation, voting or otherwise, as you have done.
- Btw, my standard would be that users who vote should at least come to the proposal organically instead of being "funneled" or coaxed there by a user who wants to see their proposal passed (as is the case here). They should also have accounts on the main website. Which none of the users who I looked into had. If they aren't actual mappers (as these people weren't), there's zero legitimate reason for them to vote on a tag, IMO. Even if there is no explicit rule about it (there should be one, though). I see absolutely nothing wrong with a standard of "minimum knowledge and participation" to be able to do certain things that could potentially be detrimental if done wrong, or that require a minimum of "community buy-in" to be implemented. Ultimately, voting should at least represent the wishes of the community (that's the main point of it). You can't say people with zero wiki edits and no OSM accounts are the community or that their opinions should be counted in that regard. --Adamant1 (talk) 01:27, 5 December 2019 (UTC)
Edit warring on shop=car article
I was pretty clear that I deleted the section because of its lack of neutrality. It didn't have anything to do with "not getting it." Therefore, in response to your edit warring, I rewrote and expanded the article to be more neutral and discuss the pros/cons of both tagging schemes. Ultimately, neither scheme is good, IMO. Really, the car repair shop should just be mapped separately to abide by the One feature, one OSM element guideline. Which neither tagging scheme allows for. If you revert my edit again, I will report you for edit warring. If there's something you feel needs more explanation, feel free to add it, but I think how I rewrote it is a middle ground and allows people to choose the best option on their own. The other stuff about what iD Editor did etc. is irrelevant and confusing, so I left it out, as it doesn't need to be included as far as I'm concerned; I was able to explain things fine without it. At least I was willing to compromise by rewriting the section of the article in a way that gives equal weight to everything and doesn't favor one tagging scheme over the other. That's more than you can say. If you have any other problems with it going forward, start a discussion on the article's talk page about it. We should be discussing these things so we can come to an agreement on what's best for OSM tagging, instead of everyone just working against each other or a single person's opinion winning out when it might not be the best way of doing things. --Adamant1 (talk) 04:25, 11 December 2019 (UTC)
- Thanks for acting constructively, though I don't get your logic at all. Where did you get the sentence >>"It is particularly important to avoid tags which define what an element is." That's exactly what your namespace tags do.<< from? Certainly all tagging is about "what an element is". And your argumentation with One_feature,_one_OSM_element#One_feature_per_OSM_element is also "pro-namespace", as it avoids having several shop types if the shop offers not just cars but also repair and other stuff. See the example in the same wiki article ("instead of using the separate feature tag"). user:rtfm Rtfm (talk) 11:30, 16 December 2019 (UTC)
- Totally. I got the sentence from the Semi-colon value separator article that you keep citing. It was in the second sentence of the "When not to use" section until December 7th, when the user Mueschel removed it and added "On important "top-level" tags that define what an element is" to the top of the section instead. It's the second edit in the article's changeset history, if you're interested. I'm not really sure what to say about it now, though. It's hard to make an argument based on how the wiki defines things when people are chopping out important sentences from one moment to the next. I'd say my criticism of namespace tagging in the situations where you use it still stands, though.
- They still have the same issues you rail against semi-colon value separators for, and that are listed as reasons why they shouldn't be used. They don't allow for "choosing one of the values": you're just throwing a bunch of values at the element to see which one sticks, even if there's zero evidence it's actually useful to the map user (i.e., taking the "mapping every blade of grass" approach), instead of "taking the overriding "primary" value, and going with that". Which will always necessarily come at the cost of not tagging the element with the non-primary value (unless it's mapped separately, which, again, going with the namespace doesn't allow for). Also, it doesn't allow for "splitting the element". My best example of that is your namespace tag for motorcycle parking, "motorcycle:parking", which has been added to a bunch of extremely large campgrounds at the cost of mapping the individual motorcycle parking objects. But it's also an issue in the case of shop=car, where the car repair could be mapped separately but isn't if we are using your scheme. Note that the one feature, one OSM element page says "if the building (or whatever) has a clear primary feature that can be said to contain the other features, the primary feature can be tagged on the building itself, and other features mapped inside the building perimeter. (Eg, a restaurant inside a hotel, shops within a shopping mall.)" Maybe the namespace scheme works with that in some cases, just not in yours.
- You're missing the important point that no one here is completely pro or con namespace or semi-colon value separator. They are in certain situations where one, the other, or neither works out. That's it. It's not a matter of pitting one tagging scheme against the other. Namespaces work great for mapping things like addresses or social media contacts, where they can't be mapped as separate objects. They don't work well in situations where the object can be mapped separately, though, like with shop=car places that also have car repair centers that are different objects, or in campgrounds that have a motorcycle parking area that should be mapped separately. Whereas semi-colon value separators work great in situations where it's not a top-level tag, like with the cuisine values, but not in situations where it comes at the cost of top-level features not being mapped separately, like with "amenity=library;cafe". Which 100% also largely applies to your usage of the namespace scheme. It largely depends on context. That's all the wiki or anyone here has said. Nowhere does the wiki say to completely toss out semi-colon value separator tagging, or that namespaces are always the better or best alternative. Maybe neither one is. The only issue anyone has had is with your specific usage of namespaces over semi-colon separated values and top-level tagging schemes. That's it. Your particular usage of namespaces, which hardly anyone supports, is bad. Not the tagging scheme itself. I don't see how I can be any more clear about it.
- You're missing an important part of the namespace article, at the very bottom. I don't know how that can be any clearer either on why your "turn everything into a namespace tag" plan isn't a good idea and doesn't work in most cases that you're trying to do it in. --Adamant1 (talk) 03:40, 17 December 2019 (UTC)
- Did you ever have a look at the bicycle namespace? Or the "great new" one the iD admins introduced? It's all namespaces, and I didn't invent them. But you continue writing novels on my page instead of investing your time in getting this standardized. For an overview of common namespaces, see Namespace_tag_overview. rtfm Rtfm (talk) 10:57, 17 December 2019 (UTC)
- Yeah, I did. As I've said repeatedly and you keep ignoring, I've read everything and I know this subject way better than you do. Like I've also said multiple times now, just because iD Editor introduced a bad tag doesn't automatically make your tagging scheme the best or automatic alternative. I'm getting sick of saying it. Also, I'm writing detailed explanations because if I don't, you make accusations like you did in the shop=car changeset comment about how I "don't get it." You can't have it both ways, where you accuse someone of not knowing the details and then go off about message length when they present those details. Plus, every time I've tried to explain things simply, you twisted my words around and misquoted me. So... Both telling people who disagree with you that they don't know anything and decrying the length of their messages is a pretty common tactic on here by people who have no better argument, though. So I'm not surprised you'd go there.
- Anyway, back to the subject. Do you have an actual rebuttal to what I've written, or is "I don't like your message length. Check out this irrelevant wiki page I created" it? As I keep saying, and it seems like you're intentionally ignoring, I have zero problem with using namespaces where they work. My issue is with the specific instances that you have decided to use them in, for the reasons I have repeatedly given and that you clearly have no rebuttal to. Your repeated attempts at keeping the topic away from your usage of the namespaces, and at making it about namespaces more generally, prove you don't have an argument for why you're in the right. I've clearly stated why I think your usage of namespace tagging is wrong in the particular instances that I've taken issue with, both in my message above and elsewhere. If you're unwilling to address my specific issues and the specific examples for why namespace tagging isn't the best option in the specific instances I've taken issue with, I'm done talking to you about it. The same goes for the pros and cons of semi-colon value separators or anything else. I'm not discussing it unless you're specific about it, stick to the subject, and actually respond substantively to my messages. If not, I'm done discussing things with you, and we will keep having problems with each other on articles and elsewhere. If you want to have a discussion about standardization and the best way to do that, fine. But simply asserting that there's a need for standardization of everything when there might not be, and that your way of doing it is the best way when it also might not be, isn't the way to go about it. I'm not just going along with "standardization" (I don't think your way of doing it is that anyway) because you say to, and I don't think anyone else is either. There needs to be a wider discussion about the best way to go about it first, one that isn't just based on your personal opinions. --Adamant1 (talk) 04:03, 18 December 2019 (UTC)
Edit warring in the shop=rental article
What specific fact are you disputing about what I wrote in the shop=rental article that 1. couldn't have been discussed before being deleted, and 2. warrants everything I wrote getting deleted and edit-warred over? If nothing else, you should have at least added on to it instead of deleting it if you disputed something, but I think everything I wrote is worth mentioning in the article. Especially the mass edits, and the edit warring of an admin, by you to boost the numbers of the *:rental tagging scheme. Along with rental=* being an already established tagging scheme that has/had higher numbers before you screwed with it. So, what exactly do you take issue with, or what are you disputing about what I wrote?
- @Rtfm:, what facts that you keep deleting in the shop=rental article are just "opinions"? You can't just claim something is an opinion as a justification for deleting another person's writing or for edit warring them. You have to say exactly what you think is opinion and discuss it. Especially since you edit-warred me again after I started this discussion and didn't respond to it first. You're the one that's all about providing important details to mappers. If you're not willing to discuss things, then everything you have deleted should remain on the page to do just that, since it's 100% relevant to which tagging scheme people should go with. --Adamant1 (talk) 06:01, 23 January 2020 (UTC)
Reverts without explanations are not OK
- In you removed some content and gave a reason for that - this is 100% OK
- In this edit was reverted based on other reasons - this is 100% OK
- In this edit was reverted without giving any reason for that - this is not acceptable.
If you have no reason for doing an edit, then please do not do it.
If there is a conflict - please start a discussion on the talk page of the article (or edit the page and give an explanation in the edit comment).
Mateusz Konieczny (talk) 10:08, 22 January 2020 (UTC)
- I'd also include in that. --Adamant1 (talk) 05:39, 23 January 2020 (UTC)
- Also, @Rtfm:, it is inappropriate to mark edits as "minor" when they revert changes, e.g.: - perhaps you accidentally have your wiki settings set such that all your edits are "minor"? Please check your preferences and fix this, so that other users will be notified about your edits when they change the content of a page. It is appropriate to use the "minor" edit setting when you fix a minor typographical error. --Jeisenbe (talk) 04:30, 12 March 2020 (UTC)
Rental namespace tagging scheme question
Currently, with 681 uses, rental=yes is the most used rental=* tag. How would it work with the namespace in cases where either the person doesn't know exactly what the place rents (just that it does), or where they just don't feel like adding the extra details (there's no obligation they do)? Would it be yes:rental=yes? Object_not_specific:rental=yes? Rents_things:rental=yes? Things:rental=yes? Object:rental=yes? Personally, my money is on yes:rental=yes, because that's totally a "refinement" of the rental tag and follows the KISS principle (fyi, that was sarcasm). Maybe you don't have an alternative to the rental=yes tag because it just doesn't fit your belief system to have one ;) Although I assume you have thought it through enough to have one, since you've been pushing the tag everywhere, doing undisclosed mass edits of it, etc. It would be really weird if you didn't account for a basic thing like how to tag something as just "yes" when doing all that or coming up with it. So, I'd really like to know what your great idea/suggestion about it is. --Adamant1 (talk) 06:43, 30 January 2020 (UTC)
- So you think all of this was me? And probably also the other namespaces, which are all built following the same principle (see Namespace_tag_overview)? This is, for example, from December 2013:
user:rtfm Rtfm (talk) 19:09, 31 January 2020 (UTC)
- No, I don't think all of it was you, and nowhere did I say that. It should be pretty obvious that a couple of uses of a tag can be in existence before someone else comes along to evangelize its use and push it on everyone through mass edits. 100%, if it wasn't for your evangelizing and screwing with the shop rental tag, the other tagging scheme wouldn't even be a thing right now. It wasn't in 2013, which you're citing. Anyway, I didn't ask about scuba diving tags. I asked what the comparable rental namespace tag is to rental=yes. Which you know is different. If you don't have one, just say so. Save the deflection and needless sarcasm for someone else, though. As I've said before, if you can't even answer simple questions about your tagging scheme, and it is yours, without deflecting or being sarcastic (which is all you seem to do), then it's 100% not worth supporting. So just answer the question: what's the comparable rental namespace tag to rental=yes? --Adamant1 (talk) 10:35, 1 February 2020 (UTC)
rental article revert
I updated the rental proposal to better define what the tag is for, I plan to do an RfC and vote for it soon. As such, in the meantime outside of the "see also" section the rental article should only mention things directly related to the tag. Therefore, a discussion of a competing tagging scheme, that's what it is since you have mass deleted instances of rental to things you have used it on, isn't appropriate in the "possible values" section. Again, the namespace isn't a possible value of rental= anyway its a competing scheme that leads to its removal. However, I have at least linked to your competing (?) rental article, that seems to be just a list of random tags with the word "rental" in it. Although I'm not really sure what purpose it serves aside form giving you an opportunity to falsely accuse me of vandalism. None the less, I linked to it anyway in the sake of fairness. Hopefully that satiates your apatite for edit warring for the time being. At least on that article. As I need it to be relevant to proposal and without any off topic cruft. --Adamant1 (talk) 06:35, 2 February 2020 (UTC)
Claims of vandalism and personal attacks
Hello. I'd appreciate it if you refrained from personal attacks from now on as insinuating that people are dumb, internet troll, saboteurs is really against the spirit of OSM and its whole "assume good faith" thing. Also, falsely accusing others of vandalism on random articles isn't good either and shouldn't be done. Vandalism is a very specific thing and was not what I was doing. Especially considering I did my part to try and correct things through discussion, that you either refused to engage in or just used as a way to insult me. You can't just ignore discussions, insult the other user, and then call them a vandal just because you don't like their edit. If it is actually vandalism the proper thing to do is take it to an admin. With the thing your claiming was vandalism the admin that got involved didn't think so and sided with me. So, instead of making baseless claims everywhere, you should just leave it be. Otherwise, if you continue it and the insults I'll report you to an admin. --Adamant1 (talk) 06:47, 2 February 2020 (UTC)
- I would, if someone finally would tell me how to deal with |such situations without "feeding the troll" or accepting the "sabotage" User:Rtfm Rtfm (talk) 14:09, 2 February 2020 (UTC)
- Know ones going to tell you how to deal with it because its not a real problem. Its one you've completely made up because its much easier to invent a narrative that allows you to malign me or anyone else who comes along (there have been many) as a dumb person committing sabotage, then it is to take responsibility for your actions, discuss things, or compromise. The things you claim are sabotage and vandalism are 100% made up and caused by your disingenuous actions. As are the confrontations your constantly getting in with everyone, not just me. Which are also 100% on you and over made up, invented, none sense. --Adamant1 (talk) 08:20, 3 February 2020 (UTC)
Why exactly did you remove the links to the shop=scooter and shop=mobility_scooter pages from the shop=motorcycle article? This is getting extremely tiring.
Continued edit warring on shop=motorcycle
I'd appreciate it if you stopped deleting references to established tags. It's better to show what all the possible options are so people can decide what is best to use for their situation, instead of trying to whitewash references to perfectly valid tags. Your deletion of the clothes= tag from the page is a good example. It's more then established for use in that situation and is actually supported in apps. Whereas, your whole motorcycle:clothes=* thing isn't. Same goes for the rental=* tag. Claiming they are bad because they are old and using that as an excuse to remove them from the article is utter none sense. At least I'm willing to still have your tagging scheme in articles along with the other tagging options. Even if I think it's complete garbage. Whereas, I didn't even come up with the clothes or rental tags, they are used by many other OSM users. Who's tagging preferences deserve as much, if not more, representation then your utter trash "tagging schemes" do. --Adamant1 (talk) 06:49, 3 March 2020 (UTC)
- Rtfm, stop with the edit warring on shop=motorcycle. The clothes=* tag is more then established and so are the other ones. The edit warring isn't going to accomplish anything and it's just getting tiring. I reported you to Tigerfall for it. So hopefully it won't continue. --Adamant1 (talk) 09:03, 7 March 2020 (UTC)
- So, are you going to compromise by not doing the edit warring or deleting establish tags that other people want listed in the articles? Like I've said, whatever mine and others opinions are about your tagging schemes, no one is trying to remove all references to them from articles like your doing. At least I'm not. The only way forward here is to allow for listing the tagging options, along with their pros and cons, so people can decide on their own what tag to use. Everyone agrees its the way to do things and its how every other tagging article not loosely related to "motorcycle tags" is written. I fail to see why you find it so unreasonable to do the same here. So, are you willing to be reasonable and compromise? or is this just going to continue pointlessly? --Adamant1 (talk) 01:02, 9 March 2020 (UTC)
shop=scooter deletion request
your baseless insults and accusations aside, what about a shop tag is "disturb a namespace"? There is no shop:scooter shop:anything namespace that it would be disturbing. So your deletion request doesn't make any sense. Neither does your claim that it is "disturbing name space based logic", what namespace based "logic" is it disturbing exactly and if it is, why wouldn't it also apply to the shop=ski tag that you created an article for, that has namespace tags you invented? Or do your criticisms just apply to shop tags I create articles for and not ones you do? It's pretty ridiculous to pick and choose what shop tags you think are acceptable to use based on some arbitrary thing, like that they are "disturbing a namespace" whatever that means. In that case, you could just invent a namespace for whatever you want and then claim all shop tags are bad because you say so. At least in the case where I took issue with the neutrality of your motorcycle tag overview article, neutrality is an actual thing and other users agreed with me. No one is going to agree there shouldn't be shop tags anymore because they don't follow "namespace logic." Not that shop=scooter does anyway though, even if it was an actual thing and not just something you probably made up just to be retaliatory. --Adamant1 (talk) 03:38, 12 March 2020 (UTC)
- Wow, sure got your Way with the edit warring and other crap didnt you? And to think I'm suppose to be the dumb one here ;)
When to mark edits as "minor"
Perhaps you accidentally have your wiki settings such that all your edits are "minor"? Please check your preferences and fix this: click on "Preference", then on the "Editing" tab and uncheck the box that says "Mark all edits minor by default". It is inappropriate to mark edits as "minor" when they revert changes or when they make signicant changes to an article. --Jeisenbe (talk) 15:57, 14 March 2020 (UTC)
- This issue has not yet been addressed. Please do not mark your changes as "minor" when they make significant additions or subtractions from a page. --Jeisenbe (talk) 13:17, 21 April 2020 (UTC)
- @Rtfm: The change at Key:company, which effectively undid all changes on that page is definitely not minor (Special:Diff/1997399/1998320). Edits undoing changes should be never tagged as minor. If you are not sure, please do not tag any edits as minor. This pattern occurred multiple times recently (Tag:shop=trailer, Tag:amenity=music_venue). Please address this issue now. I will block you for a week, because this needs to change. --Tigerfell
(Let's talk) 12:49, 8 June 2020 (UTC)
Redirects
Using redirects wrongly seems to be a repeated issues. So, when do you think it is correct and useful to use a redirect and when isn't?
Tantrum throwing
It's pretty clear you don't the patience or ability to interact with other users without getting upset and lashing out. It's seriously getting in the way of the quality of the Wiki. Both your tantrums, along with the repeatedly bad edits and edit warring aren't improving the Wiki any. Maybe you should take a break from editing for awhile. It seems like your way to personally invested in this. So, it might be a good idea to step back from it until you can handle things more appropriately. --Adamant1 (talk) 20:34, 13 April 2020 (UTC)
shop=motorcycle table
Its utterly pointless and just clutters the article to have a table for one type of tagging. Especially when 99% of what's in the table isn't helpful. I could give a crap about tables in other articles. Saying I can't make one article more easily readable because others aren't is circular deflective garbage. There's plenty of examples of tables in the wiki that add the other worthless cruft yours does. Maybe take an example from one of them, or better yet just skip the table. Its completely uncessary and doesn't add anything useful. Everyone knows what a damn tire looks like and what the tire means. Its not helpful to have a visual aid for things people aren't going to be confused about. Also, that table in particular is worthless because it just lists tags already listed in the article above it. You dont need to list the same tags multiple times in the same article. Its not helpful to anyone. Seriously.
- That's your personal point of view. But if obviously the majority sees it as useful. I'd call that a "general agreement" or consensus. I already created a page about "general wiki principles" where it could be documented when to use a table and when not, but this has also be vandalised (by a redirect which points to a page which doesn't explain these topics) rtfm Rtfm (talk) 11:49, 16 April 2020 (UTC)
Request regarding wiki discussions
As you might have noticed, I asked user Adamant1 to refrain from editing any pages they have edited between 1 and 16 April 2020, except for their talk page and my talk page. Additionally, I asked them not to comment on edits made by you and not to edit a page that you will edit, except for their talk page, until 16 June 2020.
I would like to ask you in turn, not to initiate any discussion with them for this time span, too. --Tigerfell
(Let's talk) 19:50, 17 April 2020 (UTC)
- Thank you very much for your intervention, I will certainly avoid any confrontation rtfm Rtfm (talk) 20:01, 17 April 2020 (UTC)
- Well, looking at Talk:Tag:shop=vehicles, I see that you initiated exactly this kind of discussion I asked you to avoid. You wrote:
But you both [Jeisenbe, Adamant1] seem to ignore this fact. I'm not sure about the reason, don't you map enuogh or is it an act of sabotage ?(insertion of user names for clarification) Your statement sounds provoking to me. This kind of provocation needs to stop. Please take a few days off from wiki editing and reconsider your statement. --Tigerfell
(Let's talk) 10:00, 25 April 2020 (UTC)
- So how would you call it to sabotage nearly all my edits, for example the recent removal of a link to the opening hours tool ? I don't think it's appropriate that I need to stay polite if some users mostly act destructive. It's something else if it's just about point of views (regarding formatting of a page or similar) rtfm Rtfm (talk) 11:55, 25 April 2020 (UTC)
- Edits are not being sabotaged. I did not revert the edits which added the links to the external opening_hours tool, but merely removed the links and opened a discussion in the Talk page for the affected pages, as well as here (below). Another 2 users have agreed with me, both here and at another page. Sometimes we all make mistakes by adding or removing things from wiki pages without getting community consensus, so all of us sometimes have our edits changed or reverted. This happens more often if changes are made without discussing them first, or without checking first how tags are being used in practice. --Jeisenbe (talk) 18:39, 25 April 2020 (UTC)
- Exactly this "merely removed" is freakin' me out meanwhile. If there's no alternative which makes sense and there's no '"cludder" as of the information, either replace it by a better way to describe it or let the info in. But solely using the "Del" key as often as possible doesn't make the wiki more "readable" (unless I'd really appreciate it to be simple). Please discuss before removing things, if it's just as you don't understand the reason for the info. Especially in the opening_hours example : try to imagine your mum should understand it. It's similar to any user who just wants to edit his "own" data (such as a shop or restaurant). BTW: there were also users congratulating me for the tool link as it's helpful. And I'm not wondering that among potentional millions of users there might be some naysayers. If they are against, they should propose a better alternative. Just opposing is a no go rtfm Rtfm (talk) 20:07, 25 April 2020 (UTC).
Another example where you removed helpful info (and had no clue what you're doing) : --> since when do offices not have a reception ? rtfm Rtfm (talk) 22:23, 25 April 2020 (UTC)
Opening_hours tool links
- I don't think we should have a link to the opening_hours tool [3] on every shop page. Mappers can be expected to follow the link to opening_hours=* to learn about the format. Similarly, we don't give a long explanation about all the different Addr tags, or the different names tags, on each feature page. --Jeisenbe (talk) 01:21, 24 April 2020 (UTC)
- I agree with User:Jeisenbe. --Nakaner (talk) 07:12, 24 April 2020 (UTC)
- Certainly the opening_hours page could be cleaned up alternatively. But affected shops (and similar) nearly don't got a chance to easily figure out how to edit their opening hours (at the moment). And as long as easy frontends (like) are rare, there should be a possibility to check the syntax format without reading manuals for half an hour. That would raise the acceptance of OSM edits by those whose data is affected. rtfm Rtfm (talk) 10:24, 24 April 2020 (UTC)
please stop trying to hide your failed fraud
Your false redirect at EN:Key:motorcycle friendly (attempt to override Key:motorcycle friendly) is yet another attempt to handle a failed voting fraud. Please stop trying to hide this attempted (and failed) voting fraud documented at Proposed features/motorcycle friendly/tag description.
I would really forgot about this long time ago without your continued attempt to erase it, and everyone is doing sometimes stupid things.
Please, stop reminding everyone about this attempted fraud. Everyone will forget sooner or later, but reminding everyone about it will not help. Mateusz Konieczny (talk) 22:51, 29 April 2020 (UTC)
Requested ban
I opened requesting banning you due to your legal threats and other undesirable activity Mateusz Konieczny (talk) 14:24, 30 April 2020 (UTC)
- That's the right way to really make me do this : rtfm Rtfm (talk) 14:57, 30 April 2020 (UTC)
Cooling down period
Dear RTFM, there have been several complaints about your behaviour recently, including requests for banning you from the Wiki. A cursory check of your User- and Talk-page show quite a few cases of combative language. I don't really have much time to look deeper into this right now. So I ask you to please refrain from editing on this Wiki for this week except your User- and Talk-page, and those two please without adding any insults, name-calling or similar language. This will hopefully cool down tempers on all sides and give me some time to find out what this is all about. Thank you for your cooperation. --Lyx (talk) 22:16, 3 May 2020 (UTC)
- Dear Lyx, this is an example why I talk about "sabotage" : - If this isn't making it worse by purpose, I don't know what else. Stumbled on it as someone with common sense edited the description in the wiki rtfm Rtfm (talk) 10:49, 14 May 2020 (UTC)
- @Lyx Another example, food trucks and similar re-defined : (for hundreds which were tagged as shop=street_vendor. Similar with several (standardised) address blocks in the wiki : . Makes no sense at all, so I'd call this sabotage. user:rtfm Rtfm (talk) 19:27, 5 June 2020 (UTC)
- @Rtfm: For mapping issues please refer to Data Working Group. Looking at the history of Tag:shop=deli, I struggle to see any sabotage. You replaced wiki text with a template and Mateusz Konieczny disliked the replacement and undid it. I already had a discussion with him about the use of this template about a year ago. When analysing the user's contributions tagged as "undo", I noticed that he undid quite a lot of changes but not limited to your edits.
- Please avoid commenting on Adamant1's contributions until 16 June as I have asked you on 17 April. Thanks! --Tigerfell
(Let's talk) 12:05, 6 June 2020 (UTC)
- Probably it would be wiser to create a page about wiki standardisation instead of several years of discussion. Especially as some discussions remind me of "alternative facts" user:rtfm Rtfm (talk) 12:56, 7 June 2020 (UTC)
- Well, first one needs to propose and discuss a standard, because one needs supporters willing to follow the standard. --Tigerfell
(Let's talk) 13:21, 7 June 2020 (UTC)
Article link
Can you provide a link to the article you cited?
- That's a bit diffcult to answer without context, but I assume you mean this one : structural problems rtfm Rtfm (talk) 17:23, 22 August 2020 (UTC)
- It shouldn't be. You accused specific editors of being paid actors because of an article as a justification to get your way on something. So, there should be an article that specifically calls those specific editors paid actors. Otherwise, it's just a baseless, false accusation. An article about general problems in OSM that doesn't mention the specific people your making the accusation about doesn't cut it. Otherwise, anyone could use it to accuse anyone, including you, of pretty much anything. So, either you have specific evidence about the exact users your making the claims about or you should stop making them.
- That said, plenty of people involved in the project (probably most of them), benefit from it financially in some way. That's how it works with a lot of platforms like this one and there's zero wrong with that. Even Woodpeck, who constantly goes off about commercial interests being involved in OSM, owns a company selling shape files and therefore benefits financially from the project. So, attacking people for it is completely ignorant as to how things work. Especially as a way to attack us for our issues with what your doing. It's totally ridiculous to claim any of the basic edits your making would have any effect on anything, financially or otherwise. Let alone would anyone be wasting their money paying anyone to edit war you. What you've done has had zero impact on the project at all, period. That's just a fact. Even the stuff we don't get in arguments with you about. --Adamant1 (talk) 04:45, 23 August 2020 (UTC)
Please stop manipulations '346 237 usages of "capacity" is "noone" ?' while readding capacity:seats=* (not capacity=*) is not OK, especially as capacity:seats=* has 178 uses.
Please stop misleading edit descriptions (especially as it will not help to hide real changes, people learned to not trust what you put as edit description) Mateusz Konieczny (talk) 06:53, 24 September 2020 (UTC)
- In you again used misleading edit description Mateusz Konieczny (talk) 09:25, 6 November 2020 (UTC)
Recreating without any mention of severe venerability problems is also not OK Mateusz Konieczny (talk) 12:13, 4 December 2020 (UTC)
Please review how a tag is used before creating a new page to document it
The new pages Tag:office=university and Tag:office=healthcare appear to have been created without investigating the actual usage of the tag in the OpenStreetMap database.
Please review how a tag is being used in at least a couple of countries, especially when it has only been used 100 or 200 times, before documenting it. A tag: or Key: page, unlike a Proposal, is a documentation of how things are actually done. This requires research. If you wish to propose tags, you may do this at say Proposed_features/Tag:office=university which will make it clear that the description is a suggestion. --Jeisenbe (talk) 06:39, 19 November 2020 (UTC)
- This shows once again that you're only holding thigs up instead of improving them. I'm still uncertain if by purpose, but I assume so. In case you had another logic, describe it in the wiki. But to ignore that there are in fact offices in universities as also in hospitals and similar healthcare institutions is in the best case ignorant, if not sabotage of the wki logic. rtfm Rtfm (talk) 10:52, 19 November 2020 (UTC)
- Of course there are offices in universities. I'm asking that if you want to document the tag office=university, please check how this tag is being used in more than one country. Your documentation suggested that it was a way to tag a university faculty (aka department or division), but more often it is used for administrative or services offices, e.g. student services, registrars etc, and the way you wrote the new page did not mention this usage. --Jeisenbe (talk) 22:22, 19 November 2020 (UTC)
- replied at Talk:Tag:office=university rtfm Rtfm (talk) 22:40, 21 November 2020 (UTC)
Sales Etc. Etc. namespace proposal
I was under theNamespace impression that you abandoned your namespace proposal because no one wanted to do it. If not, you should take the idea to a vote so we can either implement the thing if that's what the community wants or be done with it. In the meantime though, adding tags abandoned tags that no one wants to use to important main pages like Namespace as examples of "standard ways of doing things" when they aren't isn't helpful or an improvement on anything. So, you should desist from doing it. Either take your proposed tags to a vote or stop posting them everywhere. Especially in places where they clearly don't belong. --Adamant1 (talk) 01:58, 25 November 2020 (UTC)
(Especially considering the alternative, electronics_repair=*, has like 575 uses and is therefore clearly the more accepted tag for that) --Adamant1 (talk) 14:33, 5 December 2020 (UTC)
If marking pages as approved please link proposal
re
In case of marking tag as approved always add also link to proposal where it happened. Apply your username and read documentation of relevant templates how to do this Mateusz Konieczny (talk) 13:17, 26 December 2020 (UTC)
- Nice to know, but it wasn't me. Please stop spamming. rtfmRtfm (talk) 17:19, 26 December 2020 (UTC)
- IN you modified dirtbike:scale page to mark status as approved. What is the relevance of ? Mateusz Konieczny (talk) 19:23, 26 December 2020 (UTC)
Dealing with criticism of your tag inventions
You have invented a couple of tags that have been criticised by other users, such as shop=street_vendor where users noted that the use of this tag prevents the specification of what that shop actually sells, and that alternative tags exist. I might add that this tag breaks the inherent logic of the shop=* key, because I guess you can't buy street_vendors at that shop. Your reaction to criticism so far seems to be to sneak references to your tags into the documentation of other tags, so unsuspecting users might use them despite the problems (that they would not know about unless they lookup the tag page). At least in the recent case of adding your tag to shop=jetski you also used a disparaging changeset comment. This is a very unproductive way of doing things that helps no-one (yourself included) and does nothing to improve OSM. Please deal with criticism by commenting on the factual points made by others, try to convince them, and be open to the possibility that in some cases others might have the better arguments. The goal of this wiki is to document OSM tagging and together find ways of improving it. Who invented what tagging is totally irrelevant and will be forgotten over time anyways. --Lyx (talk) 19:59, 28 December 2020 (UTC) | https://wiki.openstreetmap.org/wiki/User_talk:Rtfm | CC-MAIN-2021-04 | refinedweb | 15,710 | 68.4 |
The variable arguments passed to a variadic function are accessed by calling the
va_arg() macro. This macro accepts the
va_list representing the variable arguments of the function invocation and the type denoting the expected argument type for the argument being retrieved. The macro is typically invoked within a loop, being called once for each expected argument. However, there are no type safety guarantees that the type passed to
va_arg matches the type passed by the caller, and there are generally no compile-time checks that prevent the macro from being invoked with no argument available to the function call. The C Standard, 7.16.1.1, states [ISO/IEC 9899:2011], in part:and the other is a pointer to a character type.
Ensure that an invocation of the
va_arg() macro does not attempt to access an argument that was not passed to the variadic function. Further, the type passed to the
va_arg() macro must match the type passed to the variadic function after default argument promotions have been applied. Either circumstance results in undefined behavior.
Noncompliant Code Example
This noncompliant code example attempts to read a variadic argument of type
unsigned char with
va_arg(). However, when a value of type
unsigned char is passed to a variadic function, the value undergoes default argument promotions, resulting in a value of type
int being passed.
#include <stdarg.h> #include <stddef.h> void func(size_t num_vargs, ...) { va_list ap; va_start(ap, num_vargs); if (num_vargs > 0) { unsigned char c = va_arg(ap, unsigned char); // ... } va_end(ap); } void f(void) { unsigned char c = 0x12; func(1, c); }
Compliant Solution
The compliant solution accesses the variadic argument with type
int, and then casts the resulting value to type
unsigned char:
#include <stdarg.h> #include <stddef.h> void func(size_t num_vargs, ...) { va_list ap; va_start(ap, num_vargs); if (num_vargs > 0) { unsigned char c = (unsigned char) va_arg(ap, int); // ... } va_end(ap); } void f(void) { unsigned char c = 0x12; func(1, c); }
Noncompliant Code Example
This noncompliant code example assumes that at least one variadic argument is passed to the function, and attempts to read it using the
va_arg() macro. This pattern arises frequently when a variadic function uses a sentinel value to denote the end of the variable argument list. However, the caller passes no variadic arguments to the function, which results in undefined behavior.
#include <stdarg.h> void func(const char *cp, ...) { va_list ap; va_start(ap, cp); int val = va_arg(ap, int); // ... va_end(ap); } void f(void) { func("The only argument"); }
Compliant Solution
Standard C provides no mechanism to enable a variadic function to determine how many variadic arguments are actually provided to the function call. That information must be passed in an out-of-band manner. Oftentimes this results in the information being encoded in the initial parameter, as in this compliant solution:
#include <stdarg.h> #include <stddef.h> void func(size_t num_vargs, const char *cp, ...) { va_list ap; va_start(ap, cp); if (num_vargs > 0) { int val = va_arg(ap, int); // ... } va_end(ap); } void f(void) { func(0, "The only argument"); }
Risk Assessment
Incorrect use of
va_arg() results in undefined behavior that can include accessing stack memory.
Automated Detection
Related Vulnerabilities
Search for vulnerabilities resulting from the violation of this rule on the CERT website.
2 Comments
Oleg Omelyusik
In the last compliant solution function is called with invalid arguments order:
func("The only argument", 0);
but should be:
func(0, "The only argument");
David Svoboda
Fixed, thanks. | https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87151991 | CC-MAIN-2019-22 | refinedweb | 568 | 54.83 |
We are using Portable Batch System Pro (PBSPro) version 5.0.6. The home page for the PBSPro is at.
The main components are the Server (pbs_server), Scheduler (pbs_sched) and the job executor (pbs_mom, otherwise known as Machine Oriented Mini-Server - MOM).
Currently in DSD, slappy and diesel are the two machines running PBS. They run PBS independently, in the sense that each have all of the three components. This means the jobs submitted to each of them are queued and executed in the same machine.
At the moment, we only allow batch jobs in these machines; no interactive jobs are allowed. Jobs are submitted, using qsub command, through job scripts to queues. We have default, small, medium, long, verylong queues. Based on the CPU time needed for running the job, as specified at the time of submitting the job, the job is moved on to one of the four execution queues.
To access PBS commands and man pages you need the following :
setenv PATH /usr/grid/pbs/bin:$PATH setenv MANPATH /usr/grid/pbs/man:$MANPATHIf you don't find these entries already in your $PATH and $MANPATH, just do a module load pbs. Edit your .cshrc or .login file to run this command everytime you login.
There are several things you should know about setting up your shell
prompt and environment for use under PBS:
When PBS runs a job, it runs the jobs with your user priviledges your shell automatically executes
If you still want to customize your environment, you can use the PBS
environment variable ($PBS_ENVIRONMENT) to check if you are
running under PBS. This environment variable is only defined when
running under PBS. The variable takes two values:
PBS_INTERACTIVE (interactive jobs) or PBS_BATCH (batch
jobs). For example you may want to enclose any terminal or prompt
setup in your init files as follows ("csh" sample):
# # Normal setup executed only when we are not under PBS # if ( ! "$?PBS_ENVIRONMENT" ) then do terminal stuff here run setup commands with output here Your local setup for the prompt setenv prompt myprompt Your local Terminal setup such as (stty, resize, etc) stty delete mydelete endif #
While there is not a 100% overlap, many of the parameters used to specify resource requirements in PBS have a direct corresponding resource parameter in Globus' RSL. The following table shows the parameters available at this time.
Globus reports the status of jobs with various "states" a job goes through. This is similar to PBS processing states. The following table shows these correspondences. | http://web.archive.org/web/20120701103826/http:/doesciencegrid.org/public/pbs/ | CC-MAIN-2013-48 | refinedweb | 418 | 60.35 |
Thoughts of Keyboard.io
I was an original backer of the keyboard.io Kickstarter, and spent months and then years on a roller coaster ride with Jesse and Kaia and their ambitious plan to craft the ultimate keyboard. As a serious knerd (keyboard nerd) that has a few too many mechanical keyboards in his office, I couldn’t wait to use it.
While all projects take longer than expected, good ones where the craftsman cares about the quality, require even more time. Now that I am finally typing this review on the keyboard, I am quite pleased with the results.
Product Quality
The combination of the switches and the wood enclosure gives the keyboard an amazingly solid feel. A quick glance at the website will show you that the keyboard has a unique design, so it took a week to hone the angle and placement of the halves to fit my typing style. I initially assumed that I wanted them splayed at an angle that tented in the middle, but having the outside edges tilted up actually worked best for me. Kudos to the unexpected, but ingenious design to have each half float on angled pads.
Support
The Model 01 comes in two flavors … loud and quiet. As a long-term Kinesis user, I appreciate the springiness typically associated with louder switches, but since I wanted one for my office, I decided to order one of each.
However, when it came time to fill in the form, I accidentally checked two loud keyboards, and didn’t realize this until the second had arrived. At first, I was confused and started with questions on the nice Community Forum. I suppose I shouldn’t be surprised that Jesse immediately contacted me and worked with me to get a different model. I couldn’t be happier with the experience.
Hacking
I love products that support and encourage hacking and personalizing, and
this is one of the best keyboards for this. I actually got along for a month
or so before I felt compelled to hack…that and I wasn’t sure if I wanted a
OneShot-based program where the modifier keys were sticky (allowing me to
type and release the Control and then the
c to copy), or if I wanted to double
the modifiers so that if they were pressed and released on their own, they
would send other keys to the system.
Eventually, I settled on a OneShot and then programmed the Butterfly and other keys for values that Emacs could use…but more about that later.
Along with fostering a community, they've been working hard at providing a rich series of tutorials for hacking aimed at non-programmers. Customizing involves downloading code and modules and then uploading this into the keyboard from an Arduino editor, and while this may appear daunting for some, I thought the tutorials were detailed enough that I would recommend at least trying.
Issues and Concerns
This keyboard won’t be for everyone, so I’ll state a few concerns that people should be aware.
First, I realize that keycaps have their own subculture within the knerd community, and with this design, you can’t swap out the keycaps you grab from Massdrop. I found the supplied keycaps to be solid and attractive (especially since I’m more focused on programming the blinken lights), but I could be tempted if someone were to design a wood-colored keycap set.
Second, the loud version of the keyboard is…well, quite loud. If I’m on a conference call and forget to press the mute button, people duck under the table for fear of gunfire. During the Kickstarter Campaign, I ran into Jesse at OSCON, and he tried to talk me out of the loud keyboard. While I have gotten a bit used to it, I do wish I had purchased two quiet ones (so if anyone out there has one they feel is too quiet, I’ll gladly swap with you).
Third, while the separate floating design (the butterfly wings) is great for personalizing it for your hands (and I even adjust the position during the day to minimize fatigue), it makes the keyboard more difficult to transport. The solution for me was just to have two, so I haven't attempted to throw them in my messenger bag.
Let’s Customize!
Notice that compared to other keyboards, all keys are within a smaller
proximity to the home row keys (where de ol’ fingers rest). To compensate for
what would be a smaller number of key options, this keyboard (and others like
ErgoDox) subscribe to the notion of layers turned on either temporarily with a
modifier key (like the palm placed
Fn key) or toggled (like the
num lock).
On a typical laptop keyboard, the
Fn key is only associated with some keys
(notably the Function keys). This means, you can play/pause your music on a
Macbook with
Fn-
F8. While on the keyboard.io, this is bound to
Fn-
Enter, the
Fn key can work with any other key.
Most of the application-associated keycodes are connected to keys on the right side of the keyboard, leaving the left side to move the mouse. While moving the mouse seemed like a nice idea, the precision needed to place the mouse on particular text drove me crazy. Replacing these means I have a whole side of my keyboard that could be used for global shortcuts.
The problem I encountered was generating key codes my computer's operating
system recognized. I tried various ideas, but finally settled on having
Fn-
z send a
z with all the modifier keys pressed, i.e.
Control,
Option/Alt
and
Command:
LGUI(LALT(LCTRL(Key_Z)))
Next, I needed to have the operating system do something interesting with that keycode. Alfred allows you to associate various keybindings with applications, Applescripts and whatnot:
Now instead of using the
Command-
Tab to get to significant apps, I can easily
pop over to my browser, emacs or corporate communications systems. My
current favorite feature is binding the top-left key, labeled
prog, to
trigger an AppleScript that selects the Zoom app, sends it the
Command-Shift-A
sequence to toggle the mute button, and then returns to the original app:
set old to (path to frontmost application as text)
tell application "zoom.us"
    activate
    tell application "System Events" to keystroke "a" using {shift down, command down}
end tell
activate application old
Even though Zoom doesn’t offer a global way for me to mute the microphone while I’m both talking and taking notes in Emacs, I was able to solve this problem.
Emacs?
When I first saw the key layout for the keyboardio, I felt that this was a very vi-friendly keyboard. Given our collective love-hate relationship with browser-based applications, none of us are able to stay in either our VI or Emacs world exclusively, so having the arrow keys on the VI home row is brilliant. While everyone rebinds Escape to some keychord in Vi, having the Escape key as prominent as Tab or Return on the index finger is also quite nice.
Due to this, my friends and colleagues were shocked when I switched from Emacs to Vi bindings. No, I didn’t switch to Vi, I just use a Vi layer on top of Emacs, joining Dr. Pangloss in the best of all possible worlds.
That said, Jesse and Kaia gave me the greatest hacking temptation… three prominent keys in the center of the keyboard with no good default value: The famous Any key actually generates a random character, the LED cycles the backlit coloring program, and the cute butterfly key generates a Right-Alt.
Binding keys to cool functions is what Emacsians do, so I easily followed these instructions to have these keys send the extended function keys, and then added the following to my Emacs system:
;; Bind the prominent keys on my keyboard.io to useful functions:
(global-set-key (kbd "<f16>") 'er/expand-region)
(global-set-key (kbd "<f17>") 'special-return)
(global-set-key (kbd "<f18>") 'evil-avy-goto-char-timer)
Now, pressing the Butterfly key (that sends the F18 keycode) allows me to quickly jump to any word on the screen just by typing the first few letters of it, using the Avy project.
The
led (F16) allows me to select text syntactically in greater chunks with
repeated taps, using Magmar’s cool expand-region project.
When the region is active (due to the expand-region), the
any key (that is
just across from the
led) acts in the opposite way, and shrinks the region to
the previous size. If the region is not active, then the
any key (next to the
enter key) acts like a context-aware return key.
The code for that
special-return is:
(defun special-return ()
  "Fancy return bound to an almost-return key.
If in org-mode, this inserts a new line of the same type as the
current line (if making a list, it makes a new list, etc).
Otherwise, this inserts a new blank line.

Note: If the region is active, this uses the expand-region
package to shrink the region, essentially making an expand-region
opposite key."
  (interactive)
  (if (region-active-p)
      (er/contract-region 1)
    (if (equal major-mode 'org-mode)
        (ha/org-special-return)
      (newline-for-code))))
Clearly, it doesn’t do much as it just calls out to other functions.
The
newline-for-code is a simple function that allows me to hit return, but
not split the current line:
(defun newline-for-code ()
  "Inserts a newline character, but from the end of the current line."
  (interactive)
  (move-end-of-line 1)
  (newline-and-indent))
However, the
org-special-return is far more complicated, for within
org-mode
files:
- If I’m currently entering a list of items, it creates a new list item element
- If I’m entering a header, it creates a new header
- If I’m in a table, it enters a new row, etc.
For the details of this function, check out my git repository.
Obviously, what I think is helpful and useful now, will change over time… but all the better to have a hackable keyboard that doesn’t require a special app to be installed in order to generate special features.
Summary
I really, really like this keyboard, and I am very glad that I purchased two for both home and office. Now that production runs are finalizing deliverables to Kickstarter Backers, I would encourage you to consider getting one.
Let me know if you have any questions, and I’ll try to answer them. | http://www.howardism.org/Technical/Other/keyboardio-review.html | CC-MAIN-2018-17 | refinedweb | 1,780 | 55.17 |
Suppose there is a gold mine somewhere in a jungle and you are standing outside the jungle. There are 10 different paths which are going to the jungle, out of which only one path is going to lead to the mine. What would you do?
There is no option other than checking every path until the mine is found. So, you will start from the first path, and if the mine is not found using this path, you will come back to take the second path, and so on. This is backtracking: you just backtracked to the origin to take a different path when the mine was not found.
There can be more than one path leading to the mine. In that case, we use backtracking to find just one solution or more solutions depending upon the necessity of the problem.
Think about the problems like finding a path in a maze puzzle, assembling lego pieces, sudoku, etc. In all these problems, backtracking is the natural approach to solve them because all these problems require one thing - if a path is not leading you to the correct solution, come back and choose a different path.
Thus, we start with a sub-solution of a problem (which may or may not lead to the correct solution) and check if we can proceed further with this sub-solution or not. If not, then we just change this sub-solution. So, the steps involved are
- Start with a sub-solution.
- Check if this sub-solution will lead to a solution or not.
- If not, then change the sub-solution and continue again.
Take note that even though backtracking solves the problem, it doesn't always give us a great running time. For example, you will see factorial running time in many cases with backtracking, but we can still use it to solve problems of small size (like most of the puzzles).
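The three steps above can be sketched as a tiny generic routine. Everything here is illustrative: the function name and the toy problem (find a 3-bit string with exactly two 1s and no two adjacent 1s) are made up just for this example.

```python
def backtrack(partial):
    # Step 1: we always hold a sub-solution ("partial").
    if len(partial) == 3:
        # Step 2: check if this sub-solution is actually a solution.
        return partial if sum(partial) == 2 else None
    # Step 3: try each way of extending it; if an extension fails,
    # the loop simply moves on to the next one (that is the backtrack).
    for bit in (0, 1):
        if bit == 1 and partial and partial[-1] == 1:
            continue  # would create two adjacent 1s, skip this extension
        result = backtrack(partial + [bit])
        if result is not None:
            return result
    return None  # dead end: the caller will change its own choice

print(backtrack([]))  # → [1, 0, 1]
```

The same skeleton (extend, test, undo) is exactly what the N-Queens code below does with a chessboard instead of a bit string.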
Let's get our hands dirty and use backtracking to solve the N-Queens problem.
N Queens Problem
N queens problem is one of the most common examples of backtracking. Our goal is to arrange N queens on an NxN chessboard such that no queen can strike down any other queen. A queen can attack horizontally, vertically, or diagonally.
So, we start by placing the first queen anywhere arbitrarily and then place the next queen in any of the safe places. We continue this process until the number of unplaced queens becomes zero (a solution is found) or no safe place is left. If no safe place is left, then we change the position of the previously placed queen.
Let's test this algorithm on a 4x4 chessboard.
Using Backtracking to Solve N Queens
The above picture shows a 4x4 chessboard and we have to place 4 queens on it. So, we will start by placing the first queen in the first row.
Now, the second step is to place the second queen in a safe position. Also, we can't place the queen in the first row, so we will try putting the queen in the second row this time.
Let's place the third queen in a safe position, somewhere in the third row.
Now, we can see that there is no safe place where we can put the last queen. So, we will just change the position of the previous queen i.e., backtrack and change the previous decision.
Also, there is no other position where we can place the third queen, so we will go back one more step and change the position of the second queen.
And now we will place the third queen again in a safe position other than the previously placed position in the third row.
We will continue this process and finally, we will get the solution as shown below.
After understanding the backtracking and N queens problem, let's write the code for it.
Code for N Queens
Let's first write a function to check if a place is safe to put a queen or not.
We need to check if a cell (i, j) is under attack or not. For that, we will pass these two in our function along with the chessboard and its size -
IS-ATTACK(i, j, board, N).
If there is a queen in a cell of the chessboard, then its value will be 1, otherwise, 0.
The cell (i,j) will be under attack in three condition - if there is any other queen in row i, if there is any other queen in the column j or if there is any queen in the diagonals.
We are already proceeding row-wise, so we know that all the rows above the current row(i) are filled but not the current row and thus, there is no need to check for row i.
We can check for the column j by changing k from 1 to i-1 in
board[k][j] because only the rows from 1 to i-1 are filled.
for k in 1 to i-1
if board[k][j]==1
return TRUE
Now, we need to check for the diagonal. We know that all the rows below the row i are empty, so we need to check only for the diagonal elements which are above the row i.
If we are on the cell (i, j), then decreasing the value of i and increasing the value of j will make us traverse over the diagonal on the right side, above the row i.
k = i-1
l = j+1
while k>=1 and l<=N
if board[k][l] == 1
return TRUE
k=k-1
l=l+1
Also if we reduce both the values of i and j of cell (i, j) by 1, we will traverse over the left diagonal, above the row i.
k = i-1
l = j-1
while k>=1 and l>=1
if board[k][l] == 1
return TRUE
k=k-1
l=l-1
At last, we will return false, as this point is reached only when none of the statements above returned true, which means the cell (i, j) is safe.
We can write the entire code as:
IS-ATTACK(i, j, board, N)
    // checking in the column j
    for k in 1 to i-1
        if board[k][j]==1
            return TRUE

    // checking upper right diagonal
    k = i-1
    l = j+1
    while k>=1 and l<=N
        if board[k][l] == 1
            return TRUE
        k=k-1
        l=l+1

    // checking upper left diagonal
    k = i-1
    l = j-1
    while k>=1 and l>=1
        if board[k][l] == 1
            return TRUE
        k=k-1
        l=l-1

    return FALSE
Now, let's write the real code involving backtracking to solve the N Queen problem.
Our function will take the row, number of queens, size of the board and the board itself -
N-QUEEN(row, n, N, board).
If the number of queens is 0, then we have already placed all the queens.
if n==0
return TRUE
Otherwise, we will iterate over each cell of the board in the row passed to the function and for each cell, we will check if we can place the queen in that cell or not. We can't place the queen in a cell if it is under attack.
for j in 1 to N
if !IS-ATTACK(row, j, board, N)
board[row][j] = 1
After placing the queen in the cell, we will check if we are able to place the next queen with this arrangement or not. If not, then we will choose a different position for the current queen.
for j in 1 to N
...
if N-QUEEN(row+1, n-1, N, board)
return TRUE
board[row][j] = 0
if N-QUEEN(row+1, n-1, N, board) - We are placing the rest of the queens with the current arrangement. Also, since all the rows up to 'row' are occupied, we will start from 'row+1'. If this returns true, then we have succeeded in placing all the queens; if not, then we have to change the position of our current queen. So, we leave the current cell
(board[row][j] = 0) and then the iteration will find another place for the queen, and this is backtracking.
Take a note that we have already covered the base case -
if n==0 → return TRUE. It means when all queens will be placed correctly, then
N-QUEEN(row, 0, N, board) will be called and this will return true.
At last, if true is not returned, then we didn't find any way, so we will return false.
N-QUEEN(row, n, N, board)
...
return FALSE
N-QUEEN(row, n, N, board)
    if n==0
        return TRUE
    for j in 1 to N
        if !IS-ATTACK(row, j, board, N)
            board[row][j] = 1
            if N-QUEEN(row+1, n-1, N, board)
                return TRUE
            board[row][j] = 0  //backtracking, changing current decision
    return FALSE
#include <stdio.h>

int is_attack(int i, int j, int board[5][5], int N)
{
    int k, l;

    // checking for column j
    for(k=1; k<=i-1; k++) {
        if(board[k][j] == 1)
            return 1;
    }

    // checking upper right diagonal
    k = i-1;
    l = j+1;
    while (k>=1 && l<=N) {
        if (board[k][l] == 1)
            return 1;
        k=k-1;
        l=l+1;
    }

    // checking upper left diagonal
    k = i-1;
    l = j-1;
    while (k>=1 && l>=1) {
        if (board[k][l] == 1)
            return 1;
        k=k-1;
        l=l-1;
    }
    return 0;
}

int n_queen(int row, int n, int N, int board[5][5])
{
    if (n==0)
        return 1;

    int j;
    for (j=1; j<=N; j++) {
        if(!is_attack(row, j, board, N)) {
            board[row][j] = 1;
            if (n_queen(row+1, n-1, N, board))
                return 1;
            board[row][j] = 0; //backtracking
        }
    }
    return 0;
}

int main()
{
    int board[5][5];
    int i, j;
    for(i=0; i<=4; i++) {
        for(j=0; j<=4; j++)
            board[i][j] = 0;
    }

    n_queen(1, 4, 4, board);

    // printing the matrix
    for(i=1; i<=4; i++) {
        for(j=1; j<=4; j++)
            printf("%d\t", board[i][j]);
        printf("\n");
    }
    return 0;
}
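For readers who prefer Python, here is a direct transliteration of the C program above, using 0-based indexing (the function names follow the pseudocode; the printing at the end is just for inspection):

```python
N = 4  # board size and number of queens

def is_attack(i, j, board):
    # checking column j (only rows above i are filled so far)
    for k in range(i):
        if board[k][j] == 1:
            return True
    # checking upper right diagonal
    k, l = i - 1, j + 1
    while k >= 0 and l < N:
        if board[k][l] == 1:
            return True
        k, l = k - 1, l + 1
    # checking upper left diagonal
    k, l = i - 1, j - 1
    while k >= 0 and l >= 0:
        if board[k][l] == 1:
            return True
        k, l = k - 1, l - 1
    return False

def n_queen(row, n, board):
    if n == 0:
        return True
    for j in range(N):
        if not is_attack(row, j, board):
            board[row][j] = 1
            if n_queen(row + 1, n - 1, board):
                return True
            board[row][j] = 0  # backtracking
    return False

board = [[0] * N for _ in range(N)]
n_queen(0, N, board)
for row in board:
    print(row)
```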
Analysis of N Queens Problem
The analysis of the code is a little bit tricky. The for loop in the N-QUEEN function is running from 1 to N (N, not n. N is fixed and n is the size of the problem i.e., the number of queens left) but the recursive call of
N-QUEEN(row+1, n-1, N, board) ($T(n-1)$) is not going to run N times because it will run only for the safe cells. Since we have started by filling up the rows, there won't be more than n (the number of queens left) safe cells in the row in any case.
So, this part is going to take $n*T(n-1)$ time.
Also, the for loop is making N calls to the function IS-ATTACK and the function has a $O(N-n)$ worst case running time.
Since $(N-n) \leq N$, therefore, $O(N-n) = O(N)$.
Thus, $$ T(n) = O(N^2) + n*T(n-1) $$
Replacing $T(n-1)$ with $O(N^2) + (n-1)T(n-2)$, $$ T(n) = O(N^2) + n*\left(O(N^2) + (n-1)T(n-2)\right) $$ $$ = O(N^2) + nO(N^2) + n(n-1)T(n-2) $$
Replacing $T(n-2)$, $$ T(n) = O(N^2) + nO(N^2) + n(n-1)\left(O(N^2)+(n-2)T(n-3)\right) $$ $$ = O(N^2) + nO(N^2) + n(n-1)O(N^2)+n(n-1)(n-2)T(n-3) $$
Similarly, $$ T(n) = O(N^2) \left(1 + n + n(n-1) + n(n-1)(n-2) + ...\right) + n*(n-1)*(n-2)*(n-3)*(n-4)*....*T(0) $$ $$ T(n) = O(N^2) \left(O((n-2)!)\right) + n*(n-1)*(n-2)*(n-3)*....*T(0) $$ $$ = O(N^2) \left(O((n-2)!)\right) + O(n!) $$
The above expression is dependent upon both the size of the board (N) and the number of queens (n). One can think that the term $O(N^2) \left(O((n-2)!)\right)$ will dominate if N is large enough but this is not going to happen.
Think about placing 1 queen on a 4x4 chessboard. Even if the size of the board (N) is quite greater than the number of queen (n), the algorithm will just find a place for the queen and then terminate (
if n==0 → return TRUE). So it is not going to depend on N and thus, the running time will be $O(n!)$.
Another case is when the term $O(n!)$ will dominate, i.e., when the number of queens is larger than N, this will happen when there won't be any solution. In this case, the algorithm will take $O(n!)$ time.
In our case, the number of queens is also equal to N. In this case, we can write $O(N^2) \left(O((n-2)!)\right)$ as $ \left(O((n-2)!*n*n)\right)$ which is just $\frac{n}{n-1}$ times larger than $n!$ as $\left(\frac{(n-2)!*n*n}{n*(n-1)*(n-2)!} = \frac{n}{n-1}\right)$. According to the definition of Big-Oh, we can choose the value of constant $c \gt \frac{n}{n-1}$
$\left(f(n) = O(g(n)),\,if\,f(n)\le c.g(n)\right)$
and thus, say $O(N^2) \left(O((n-2)!)\right) = O(n!)$. You can use our discussion forum to get your doubt cleared.
So by analyzing the equation, we can say that the algorithm is going to take $O(n!)$.
Take a note that this is an optimized version of backtracking algorithm to implement N-Queens (no doubts, it can be further improved). Backtracking - Explanation and N queens problem article has the non-optimized version of the algorithm, you can compare the running time of the both. | https://www.codesdope.com/course/algorithms-backtracking/ | CC-MAIN-2022-40 | refinedweb | 2,353 | 76.96 |
“Sadly, in the name of progress, we have polluted the air, water, soil and the food we eat.”
In this tutorial, we are going to show you how to sense TVOC and CO2 using the CCS811 air quality sensor with Arduino. You will also learn how to interface the CCS811 with Arduino.
Material Required
- Arduino UNO
- CCS811 Air Quality Sensor
- Potentiometer (10k)
- LCD 16*2
- Breadboard
- Connecting Wires
Circuit Diagram
CCS811 Air Quality Sensor
CCS811 Air Quality Sensor is an ultra-low power digital gas sensor which integrates a MOX (metal oxide) gas sensor to detect a wide range of VOCs (Volatile Organic Compounds) for indoor air quality monitoring with an integrated MCU (Micro-controller Unit). MCU consists of ADC (Analog-to-Digital Converter) and I2C interface. It’s based on an ams unique micro-hotplate technology which empowers highly reliable solutions for Gas Sensors, with low power consumption.
In our circuit, we are using this sensor to sense the TVOC and CO2 levels in the environment and display the data on the 16*2 LCD.
Pin Configuration
Application
- Smartphones
- Wearables
- Home and Building Automation
- Accessories
Code and Explanation
The complete Arduino code for TVOC and CO2 Measurement using CCS811 Air Quality Sensor is given at the end.
In the code below, we include the libraries for the 16*2 LCD and the CCS811 air quality sensor. To download the "Adafruit_CCS811.h" library for the CCS811, follow this link.
#include <LiquidCrystal.h>
#include "Adafruit_CCS811.h"
Below, we define the pins for connecting the 16*2 LCD to the Arduino.
LiquidCrystal lcd(12, 13, 8, 9, 10, 11); // REGISTER SELECT PIN, ENABLE PIN, D4 PIN, D5 PIN, D6 PIN, D7 PIN
Adafruit_CCS811 ccs;
Below, we set up the LCD and the CCS811 air quality sensor and calibrate the sensor so that it reports the correct temperature, as shown in the code below.
void setup()
{
  lcd.begin(16, 2);
  ccs.begin();

  // calibrate temperature sensor
  while(!ccs.available());
  float temp = ccs.calculateTemperature();
  ccs.setTempOffset(temp - 25.0);
}
In the code below, we use the function ccs.available() (already defined in the library) to check whether new data has arrived. Once we get the data, we can calculate the temperature and display it on the 16*2 LCD.
Further, if the sensor has data and ccs.readData() returns false (meaning no error), we get the CO2 value using the function ccs.geteCO2() and the TVOC value using ccs.getTVOC(), as shown in the code below. Hence, we obtain the air quality readings from the CCS811 sensor.
void loop()
{
  if(ccs.available()) {
    float temp = ccs.calculateTemperature();
    if(!ccs.readData()) {
      int co2 = ccs.geteCO2();
      int tvoc = ccs.getTVOC();

      lcd.setCursor(0, 0);
      lcd.print(String("CO2:") + String(co2) + String(" PPM"));
      lcd.setCursor(0, 1);
      lcd.print(String("TVOC:") + String(tvoc) + String(" PPB "));
      lcd.print(String("T:" + String(int(temp))) + String("C"));

      delay(3000);
      lcd.clear();
    }
    else {
      lcd.print("ERROR");
      while(1);
    }
  }
}
Complete Arduino code is given below. Code is simple, all the work is done by its library itself and we have used functions defined in the CCS library to get the values of CO2 and TOVC.
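Putting the snippets above together, the complete sketch looks like this (assembled directly from the fragments already shown, with nothing new added; it requires the LiquidCrystal and Adafruit_CCS811 libraries to compile):

```
#include <LiquidCrystal.h>
#include "Adafruit_CCS811.h"

// RS, EN, D4, D5, D6, D7
LiquidCrystal lcd(12, 13, 8, 9, 10, 11);
Adafruit_CCS811 ccs;

void setup()
{
  lcd.begin(16, 2);
  ccs.begin();

  // calibrate temperature sensor
  while(!ccs.available());
  float temp = ccs.calculateTemperature();
  ccs.setTempOffset(temp - 25.0);
}

void loop()
{
  if(ccs.available()) {
    float temp = ccs.calculateTemperature();
    if(!ccs.readData()) {
      int co2 = ccs.geteCO2();
      int tvoc = ccs.getTVOC();

      lcd.setCursor(0, 0);
      lcd.print(String("CO2:") + String(co2) + String(" PPM"));
      lcd.setCursor(0, 1);
      lcd.print(String("TVOC:") + String(tvoc) + String(" PPB "));
      lcd.print(String("T:" + String(int(temp))) + String("C"));

      delay(3000);
      lcd.clear();
    }
    else {
      lcd.print("ERROR");
      while(1);
    }
  }
}
```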
Nowadays, all the cool kids are doing microservices. Whether or not you care, there ARE some really nice distributed systems patterns that have emerged from this movement. Netflix and others have shared novel solutions for preventing cascading failures, discovering services at runtime, performing client-side load balancing, and storing configurations off-box. For Java developers, many of these patterns have been baked into turnkey components as part of Spring Cloud. But what about .NET devs who want access to all this goodness? Enter Steeltoe.
Steeltoe is an open-source .NET project that gives .NET Framework and .NET Core developers easy access to Spring Cloud services like Spring Cloud Config (Git-backed config server) and Spring Cloud Eureka (service discovery from Netflix). In this blog post, I’ll show you how easy it is to create a config server, and then connect to it from an ASP.NET app using Steeltoe.
Why should .NET devs care about a config server? We’ve historically thrown our (sometimes encrypted) config values into web.config files or a database. Kevin Hoffman says that’s now an anti-pattern because you end up with mutable build artifacts and don’t have an easy way to rotate encryption keys. With fast-changing (micro)services, and more host environments than ever, a strong config strategy is a must. Spring Cloud Config gives you a web-scale config server that supports Git-backed configurations, symmetric or asymmetric encryption, access security, and no-restart client refreshes.
Many Steeltoe demos I’ve seen use .NET Core as the runtime, but my non-scientific estimate is that 99.991% of all .NET apps out there are .NET 4.x and earlier, so let’s build a demo with a Windows stack.
Before starting to build the app, I needed actual config files! Spring Cloud Config works with local files, or preferably, a Git repo. I created a handful of files in a GitHub repository that represent values for an “inventory service” app. I have one file for dev, QA, and production environments. These can be YAML files or property files.
Let’s code stuff. I went and built a simple Spring Cloud Config server using Spring Tool Suite. To say “built” is to overstate how silly easy it is to do. Whether using Spring Tool Suite or the fantastic Spring Initializr site, if it takes you more than six minutes to build a config server, you must be extremely drunk.
Next, I chose which dependencies to add to the project. I selected the Config Server, which is part of Spring Cloud.
With my app scaffolding done, I added a ton of code to serve up config server endpoints, define encryption/decryption logic, and enable auto-refresh of clients. Just kidding. It takes a single annotation on my main Java class:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer
public class BlogConfigserverApplication {

    public static void main(String[] args) {
        SpringApplication.run(BlogConfigserverApplication.class, args);
    }
}
Ok, there’s got to be more than that, right? Yes, I’m not being entirely honest. I also had to throw this line into my application.properties file so that the config server knew where to pull my GitHub-based configuration files.
spring.cloud.config.server.git.uri=
That’s it for a basic config server. Now, there are tons of other things you CAN configure around access security, multiple source repos, search paths, and more. But this is a good starting point. I quickly tested my config server using Postman and saw that by just changing the profile (dev/qa/default) in the URL, I’d pull up a different config file from GitHub. Spring Cloud Config makes it easy to use one or more repos to serve up configurations for different apps representing different environments. Sweet.
Ok, so I had a config server. Next up? Using Steeltoe so that my ASP.NET 4.6 app could easily retrieve config values from this server.
I built a new ASP.NET MVC app in Visual Studio 2015.
Next, I searched NuGet for Steeltoe, and found the configuration server library.
Fortunately .NET has some extension points for plugging in an outside configuration source. First, I created a new appsettings.json file at the root of the project. This file describes a few settings that help map to the right config values on the server. Specifically, the name of the app and URL of the config server. FYI, the app name corresponds to the config file name in GitHub. What about whether we’re using dev, test, or prod? Hold on, I’m getting there dammit.
{ "spring": { "application": { "name": "inventoryservice" }, "cloud": { "config": { "uri": "[my ip address]:8080" } } } }
Next up, I created the class in the “App_Start” project folder that holds the details of our configuration, and looks to the appsettings.json file for some pointers. I stole this class from the nice Steeltoe demos, so don’t give me credit for being smart.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
// added by me
using Microsoft.AspNetCore.Hosting;
using System.IO;
using Microsoft.Extensions.FileProviders;
using Microsoft.Extensions.Configuration;
using Steeltoe.Extensions.Configuration;

namespace InventoryService
{
    public class ConfigServerConfig
    {
        public static IConfigurationRoot Configuration { get; set; }

        public static void RegisterConfig(string environment)
        {
            var env = new HostingEnvironment(environment);

            // Set up configuration sources.
            var builder = new ConfigurationBuilder()
                .SetBasePath(AppDomain.CurrentDomain.BaseDirectory)
                .AddJsonFile("appsettings.json")
                .AddConfigServer(env);

            Configuration = builder.Build();
        }
    }

    public class HostingEnvironment : IHostingEnvironment
    {
        public HostingEnvironment(string env)
        {
            EnvironmentName = env;
        }

        public string ApplicationName
        {
            get { throw new NotImplementedException(); }
            set { throw new NotImplementedException(); }
        }

        public IFileProvider ContentRootFileProvider
        {
            get { throw new NotImplementedException(); }
            set { throw new NotImplementedException(); }
        }

        public string ContentRootPath
        {
            get { throw new NotImplementedException(); }
            set { throw new NotImplementedException(); }
        }

        public string EnvironmentName { get; set; }

        public IFileProvider WebRootFileProvider { get; set; }

        public string WebRootPath { get; set; }

        IFileProvider IHostingEnvironment.WebRootFileProvider
        {
            get { throw new NotImplementedException(); }
            set { throw new NotImplementedException(); }
        }
    }
}
Nearly done! In the Global.asax.cs file, I needed to select which “environment” to use for my configurations. Here, I chose the “default” environment for my app. This means that the Config Server will return the default profile (configuration file) for my application.
protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    // add for config server, contains "profile" used
    ConfigServerConfig.RegisterConfig("default");
}
Ok, now to the regular ASP.NET MVC stuff. I added a new HomeController for the app, and looked into the configuration for my config value. If it was there, I added it to the ViewBag.
public ActionResult Index()
{
    var config = ConfigServerConfig.Configuration;
    if (null != config)
    {
        ViewBag.dbserver = config["dbserver"] ?? "server missing :(";
    }

    return View();
}
All that was left was to build a View to show the glorious result. I added a new Index.cshtml file and just printed out the value from the ViewBag. After starting up the app, I saw that the value printed out matches the value in the corresponding GitHub file:
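For completeness, the Index.cshtml in question can be as small as the following sketch (the ViewBag property name matches the controller above; the layout and title handling are boilerplate I've kept minimal):

```
@{
    ViewBag.Title = "Home Page";
}

<h2>Config value from Spring Cloud Config:</h2>
<p>dbserver = @ViewBag.dbserver</p>
```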
If you’re a .NET dev like me, you’ll love Steeltoe. It’s easy to use and provides a much more robust, secure solution for app configurations. And while I think it’s best to run .NET apps in Pivotal Cloud Foundry, you can run these Steeltoe-powered .NET services anywhere you want.
Steeltoe is still in a pre-release mode, so try it out, submit GitHub issues, and give the team feedback on what else you’d like to see in the library.
Opened 17 months ago
Closed 15 months ago
Last modified 4 months ago
#8848 closed bug (fixed)
Warning: Rule too complicated to desugar
Description
I've a very, very modest application of Specialize to fixed-sized lists in some of my code which seems to trip up the specialization machinery. Are there any flags I can pass GHC to make sure it doesn't give up on these specialize calls?
is the only work around to write my own monomorphic versions and add some hand written rewrite rules?!
rc/Numerical/Types/Shape.hs:225:1: Warning: RULE left-hand side too complicated to desugar let { $dFunctor_a3XB :: Functor (Shape ('S 'Z)) [LclId, Str=DmdType] $dFunctor_a3XB = Numerical.Types.Shape.$fFunctorShape @ 'Z $dFunctor_a3Rn } in map2 @ a @ b @ c @ ('S ('S 'Z)) (Numerical.Types.Shape.$fApplicativeShape @ ('S 'Z) (Numerical.Types.Shape.$fFunctorShape @ ('S 'Z) $dFunctor_a3XB) (Numerical.Types.Shape.$fApplicativeShape @ 'Z $dFunctor_a3XB Numerical.Types.Shape.$fApplicativeShape0)) src/Numerical/Types/Shape.hs:226:1: Warning: RULE left-hand side too complicated to desugar let { $dFunctor_a3XG :: Functor (Shape ('S 'Z)) [LclId, Str=DmdType] $dFunctor_a3XG = Numerical.Types.Shape.$fFunctorShape @ 'Z $dFunctor_a3Rn } in let { $dFunctor_a3XF :: Functor (Shape ('S ('S 'Z))) [LclId, Str=DmdType] $dFunctor_a3XF = Numerical.Types.Shape.$fFunctorShape @ ('S 'Z) $dFunctor_a3XG } in map2 @ a @ b @ c @ ('S ('S ('S 'Z))) (Numerical.Types.Shape.$fApplicativeShape @ ('S ('S 'Z)) (Numerical.Types.Shape.$fFunctorShape @ ('S ('S 'Z)) $dFunctor_a3XF) (Numerical.Types.Shape.$fApplicativeShape @ ('S 'Z) $dFunctor_a3XF (Numerical.Types.Shape.$fApplicativeShape @ 'Z $dFunctor_a3XG Numerical.Types.Shape.$fApplicativeShape0)))
the associated code (smashed into a single module ) is
{-# LANGUAGE DataKinds, GADTs, TypeFamilies, ScopedTypeVariables #-} {-# LANGUAGE DeriveDataTypeable #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE BangPatterns #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FunctionalDependencies #-} {-# LANGUAGE UndecidableInstances #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE StandaloneDeriving #-} {-# LANGUAGE CPP #-} {-# LANGUAGE DeriveFunctor #-} {-# LANGUAGE TemplateHaskell #-} {-# LANGUAGE NoImplicitPrelude #-} module Numerical.Types.Shape where import GHC.Magic import Data.Data import Data.Typeable() import Data.Type.Equality import qualified Data.Monoid as M import qualified Data.Functor as Fun import qualified Data.Foldable as F import qualified Control.Applicative as A import Prelude hiding (foldl,foldr,init,scanl,scanr,scanl1,scanr1) data Nat = S !Nat | Z deriving (Eq,Show,Read,Typeable,Data) #if defined(__GLASGOW_HASKELL_) && (__GLASGOW_HASKELL__ >= 707) deriving instance Typeable 'Z deriving instance Typeable 'S #endif type family n1 + n2 where Z + n2 = n2 (S n1') + n2 = S (n1' + n2) -- singleton for Nat data SNat :: Nat -> * where SZero :: SNat Z SSucc :: SNat n -> SNat (S n) --gcoerce :: (a :~: b) -> ((a ~ b) => r) -> r --gcoerce Refl x = x --gcoerce = gcastWith -- inductive proof of right-identity of + plus_id_r :: SNat n -> ((n + Z) :~: n) plus_id_r SZero = Refl plus_id_r (SSucc n) = gcastWith (plus_id_r n) Refl -- inductive proof of simplification on the rhs of + plus_succ_r :: SNat n1 -> Proxy n2 -> ((n1 + (S n2)) :~: (S (n1 + n2))) plus_succ_r SZero _ = Refl plus_succ_r (SSucc n1) proxy_n2 = gcastWith (plus_succ_r n1 proxy_n2) Refl type N0 = Z type N1= S N0 type N2 = S N1 type N3 = S N2 type N4 = S N3 type N5 = S N4 type N6 = S N5 type N7 = S N6 type N8 = S N7 type N9 = S N8 type N10 = S N9 {- Need to sort out packed+unboxed vs generic approaches see ShapeAlternatives/ for -} infixr 3 :* {- the concern basically boils down to "will it specialize / inline well" 
-} newtype At a = At a deriving (Eq, Ord, Read, Show, Typeable, Functor) data Shape (rank :: Nat) a where Nil :: Shape Z a (:*) :: !(a) -> !(Shape r a ) -> Shape (S r) a --deriving (Show) #if defined(__GLASGOW_HASKELL_) && (__GLASGOW_HASKELL__ >= 707) deriving instance Typeable Shape #endif instance Eq (Shape Z a) where (==) _ _ = True instance (Eq a,Eq (Shape s a))=> Eq (Shape (S s) a ) where (==) (a:* as) (b:* bs) = (a == b) && (as == bs ) instance Show (Shape Z a) where show _ = "Nil" instance (Show a, Show (Shape s a))=> Show (Shape (S s) a) where show (a:* as) = show a ++ " :* " ++ show as -- at some point also try data model that -- has layout be dynamicly reified, but for now -- keep it phantom typed for sanity / forcing static dispatch. -- NB: may need to make it more general at some future point --data Strided r a lay = Strided { getStrides :: Shape r a } {-# INLINE reverseShape #-} reverseShape :: Shape n a -> Shape n a reverseShape Nil = Nil reverseShape list = go SZero Nil list where go :: SNat n1 -> Shape n1 a-> Shape n2 a -> Shape (n1 + n2) a go snat acc Nil = gcastWith (plus_id_r snat) acc go snat acc (h :* (t :: Shape n3 a)) = gcastWith (plus_succ_r snat (Proxy :: Proxy n3)) (go (SSucc snat) (h :* acc) t) instance Fun.Functor (Shape Z) where fmap = \ _ Nil -> Nil --{-# INLINE fmap #-} instance (Fun.Functor (Shape r)) => Fun.Functor (Shape (S r)) where fmap = \ f (a :* rest) -> f a :* Fun.fmap f rest --{-# INLINE fmap #-} instance A.Applicative (Shape Z) where pure = \ _ -> Nil --{-# INLINE pure #-} (<*>) = \ _ _ -> Nil --{-# INLINE (<*>) #-} instance A.Applicative (Shape r)=> A.Applicative (Shape (S r)) where pure = \ a -> a :* (A.pure a) --{-# INLINE pure #-} (<*>) = \ (f:* fs) (a :* as) -> f a :* (inline (A.<*>)) fs as --{-# INLINE (<*>) #-} instance F.Foldable (Shape Z) where foldMap = \ _ _ -> M.mempty --{-# fold #-} foldl = \ _ init _ -> init foldr = \ _ init _ -> init foldr' = \_ !init _ -> init foldl' = \_ !init _ -> init instance (F.Foldable 
(Shape r)) => F.Foldable (Shape (S r)) where foldMap = \f (a:* as) -> f a M.<> F.foldMap f as foldl' = \f !init (a :* as) -> let next = f init a in next `seq` F.foldl f next as foldr' = \f !init (a :* as ) -> f a $! F.foldr f init as foldl = \f init (a :* as) -> let next = f init a in F.foldl f next as foldr = \f init (a :* as ) -> f a $ F.foldr f init as -- map2 :: (A.Applicative (Shape r))=> (a->b ->c) -> (Shape r a) -> (Shape r b) -> (Shape r c ) map2 = \f l r -> A.pure f A.<*> l A.<*> r {-# SPECIALIZE map2 :: (a->b->c)-> (Shape Z a )-> Shape Z b -> Shape Z c #-} {-# SPECIALIZE map2 :: (a->b->c)-> (Shape (S Z) a )-> Shape (S Z) b -> Shape (S Z) c #-} {-# SPECIALIZE map2 :: (a->b->c)-> (Shape (S (S Z)) a )-> Shape (S (S Z)) b -> Shape (S (S Z)) c #-} {-# SPECIALIZE map2 :: (a->b->c)-> (Shape (S (S(S Z))) a )-> Shape (S (S (S Z))) b -> Shape (S (S(S Z))) c #-} -- {-# INLINABLE map2 #-}
Change History (16)
comment:1 Changed 16 months ago by Simon Peyton Jones <simonpj@…>
comment:2 Changed 16 months ago by Simon Peyton Jones <simonpj@…>
comment:3 Changed 16 months ago by simonpj
- Resolution set to fixed
- Status changed from new to closed
- Test Case set to simplCore/should_compile/T8848, T8848a
Thank yuu for reporting this. It's led me to an altogether better treatment for the LHS of rules.
Simon
comment:4 Changed 16 months ago by carter
Thank you! Glad I could accidentally help.
Any chance this might land in 7.8? :) currently my options otherwise are either
- unconditionally inline everything (with the associated costs in code complexity)
- Or write my own hand unrolled routine that has some fast paths for small size inputs, that also gets unconditionally inlined
comment:5 Changed 16 months ago by simonpj
No, it's too late for 7.8 I'm afraid. Possibly 7.8.2.
Maybe you can try
{-# RULE map2 = map2_spec #-} map2_spec :: (a->b->c)-> (Shape Z a )-> Shape Z b -> Shape Z c map2_spec = inline map2
and so on for the other cases. (Untested.)
Simon
comment:6 Changed 16 months ago by carter
figured as such, glad things are shipping! 7.8.2 would be fine
Yeah, I'll be trying out some ideas like that rules soon
comment:7 Changed 16 months ago by carter
- Milestone set to 7.8.2
setting milestone for 7.8.2 so its on the list when that rolls around
comment:8 Changed 16 months ago by thoughtpolice
- Milestone changed from 7.8.2 to 7.8.3
- Status changed from closed to merge
This shouldn't be marked fixed. 7.8.2 will be a critical bugfix release, but I think we'll punt this for consideration to 7.8.3 instead.
comment:9 Changed 15 months ago by thoughtpolice
- Status changed from merge to closed
This didn't properly merge to the 7.8 branch - I think some of Joachim's work (some which probably should not be merged) caused a conflict, and I haven't traced down exactly which commits those are.
As it is, I'm inclined to not merge this, then. I'm marking as fixed - please let me know if someone disagrees.
comment:10 Changed 15 months ago by thoughtpolice
- Milestone changed from 7.8.3 to 7.10.1
comment:11 Changed 15 months ago by carter
@thoughtpolice, if there was a path to getting this into 7.8.3 that I could help with making happen, i'm willing to help do some leg work (though it touches on pieces of GHC i'm not yet familiar with).
I believe I can work around this limitation in SPECIALIZE for now, but if there was a way to help get it into 7.8.3, please let me know.
(though i'll be excited to revisit my engineering on 7.9 / 7.10 on way or another)
comment:12 Changed 5 months ago by yongqli
It is possible that
{-# RULE map2 = map2_spec #-} map2_spec :: (a->b->c)-> (Shape Z a )-> Shape Z b -> Shape Z c map2_spec = inline map2
creates an infinite loop, because we end up with
map2_spec = inline map2_spec
?
My program seems to hang after trying it, but GHC does not throw <<loop>>.
comment:13 Changed 5 months ago by yongqli
@carter, were you able to get a workaround to work? We are experiencing the same issue.
comment:14 Changed 5 months ago by thomie
@yongqli A fix for this issue is supposed to be in 7.10. Please try your code with release candidate 2.
comment:15 Changed 5 months ago by yongqli
@thomie: We are stuck on GHC 7.8 for now :(.
For what it's worth, I was able to work around the problem using the RULES method. I set the rule to fire after phase 1, so that "map2" would have already been inlined away, thus preventing an infinite loop.
comment:16 Changed 4 months ago by simonpj
I don't think that comment:12 has much to do with this ticket although it's hard to tell without a repro case.
It's easy to make GHC diverge using rules. Most crudely
{-# RULE map2 = map2 #-}
would do it, by making the rule fire repeatedly. Your code looks sort of like that, although as I say it is hard to tell.
You can see more of what is happening with -ddump-inlinings and -ddump-rule-firings.
For now I think this probably a user error.
Simon
In 41ba7ccb742278de0abf32cb7571c71b150997a3/ghc: | https://ghc.haskell.org/trac/ghc/ticket/8848 | CC-MAIN-2015-32 | refinedweb | 1,788 | 71.24 |
Overview of the C# version 1.0 to 4.0
Overview of the C# version 1.0 to 4.0
Join the DZone community and get the full member experience.Join For Free
One day I and my (Girl Friend)GF went to the Cafe coffee day to drink coffee after completing office. At that day I carried the C# book in my bag, which I took office to explain the newer version to my colleague. She suddenly looked in my bag and which started below conversation.
GF: why do you always carry the C# book which marked with the different version on it?
Me: The C# developer team released around 4 major version of C# language. Every version comes up with some new feature(s) which makes development easy and simple. As well as cause me to do the change the code accordingly.
GF: How that can be possible every version release causes you to change your code?
ME: (on my laptop) ok. I will show what the revolution in language.
C#1.0
Specification of Version 1
Following is code for the Employee class for my system.
public class EmployeeCompare : IComparer { public int Compare(object x, object y) { Employee emp1 = (Employee)x; Employee emp2 = (Employee)y; return emp1.Name.CompareTo(emp2.Name); } }
Above code is for sorting employee on name property. You can see, I need to convert from general/generic object type into employee first, then i need to compare name property of employee objects
Note here I have implemented IComparer to sort employee.
public class Employee { string _name; public string Name { get { return _name; } set { _name = value; } } int _id; public int ID { get { return _id; } }
Above I have created two properties to store employee name and id, actually here private variable holding value of the property.
One more thing to note here is I have written get block to make ID readonly.
public Employee(int id, string name) { this._id = id; this._name = name; }
Below method create the list of the employee and storing it in the ArrayList.
public ArrayList GetEmployee() { ArrayList al = new ArrayList(); al.Add(new Employee(1, "pranay")); al.Add(new Employee(2, "kruanal")); al.Add(new Employee(2, "hanika")); return al; }
But when I am adding employee object actually I am converting it in the object.
public void displayEmployee() { ArrayList alEmployee = GetEmployee(); alEmployee.Sort(); foreach (Employee emp in alEmployee) { Console.WriteLine(emp.Name); } } }
Above I am requesting the employee class to display all Employee in sorted order by calling displayEmployee method.
Note here I am converting each object of ArraList in employee to display data.
GF: it’s some what similar to java /c++ class like other programming language.
Me:Now we moving to second version of language which change this class.
GF: let's see.
C#2.0
Specification of Version 2
As you see here now I can created IComparer with the specific Type like employee. This is because of the C#2.0 included new feature called Generic.
public class EmployeeCompare : IComparer<Employee> { public int Compare(Employee x, Employee y) { return x.Name.CompareTo(y.Name); } }
Now to compare two employee object I no need to convert object to employee.
public class Employee { string _name; public string Name { get { return _name; } set { _name = value; } } int _id; public int ID { get { return _id; } private set { _id = value; } }
As you can see the properties above I can able to set the the access level to private which make the property readonly.
public Employee(int id, string name) { this._id = id; this._name = name; }
In GetEmployee now I can put the Employee objects directly as Employee because of the Generic feature.
public List<Employee> GetEmployee() { List<Employee><employee> al = new List<Employee>(); al.Add(new Employee(1, "pranay")); al.Add(new Employee(2, "kruanal")); al.Add(new Employee(2, "hanika")); return al; } </employee>
As Generic feature make the List typesafe and there is no need of converting object to employee. So to display employee I no need to covert List object to employee again.
public void displayEmployee() { List<Employee> alEmployee = GetEmployee(); //alEmployee.Sort(); alEmployee.Sort(delegate(Employee x, Employee y) { return x.Name.CompareTo(y.Name); }); foreach (Employee emp in alEmployee) { Console.WriteLine(emp.Name); } } }
Me: So got something new and change of things.
GF: yeah......it's look like big change and cool.
Me: so we are moving on version three of the language.
GF : go on I like it eager to know what new now in this version.
C#3.0
Specification of Version 3
Now the below code is with the next new version of the language.
public class Employee { public string Name { get; set; } public int ID { get; private set; }
Well as you can see here there is no need of the private variable to hold the value of the property. This is because the new feature automic property of the language. This is usefull when I don’t have any logic to manipulate with the value.
Employee() { } public Employee(int id, string name) { this.ID = id; this.Name = name; }
In GetEmployee now I do not need to add the employee object one by one that I can do easily with the object Inialization feature of new version.
public List<Employee> GetEmployee() { return new List<Employee> { new Employee{ Name="pranay", ID=1}, new Employee{ Name="krunal", ID=2}, new Employee{ Name="hanika", ID=3} }; }
Now displayEmployee method here I do not need to implement IComparare or any anonymous method to sort data I can do it easily with the lamda expression or by using extension method provided by the this version of language. like this new feature of this version.
GF:yeah...LINQ feature is just awesome. I just liked it much.
Me: Now the latest version of this language. This version is not doing major changes as its did by the previous version because the latest version more towards the dynamic types in language.
GF: Lets see.
C#4.0
Specification of Version 4
public class Employee { public string Name { get; set; } public int ID { get; private set; }
Now this version allows you to set the default parameter value for the argument in the function which is not possible with the previous version (Optional parameter). So the constructor for the employee looks like as below.
public Employee(string name, int id = 0) { this.ID = id; this.Name = name; }
Note by this feature I do not have to create extra overloading methods in my class.
In GetEmployee method I can set the value of variable by specifying name of the parameter of the function.
public List<Employee> GetEmployee() { return new List<Employee> { new Employee(name : "pranay", id :1 ), new Employee(name : "krunal", id :2 ), new Employee(name : "hanika", id :3 ) }; } this version do the changes in the way I declare parameter in the method and the way I pass value to the method when call.
GF: I like this feature of the optional parameter and the way i can assign the value to parameter. This is really awesome each version of C# came up with the new feature and change the things.
GF: I just say woooo.... this means we can spend more time because each new version save your time.
Published at DZone with permission of Pranay Rana , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/overview-c-version-10-40 | CC-MAIN-2018-34 | refinedweb | 1,238 | 57.87 |
While working for a client recently, I was given a small project to produce a report that would help reconcile differences in data that existed in four to five different database sources. The requirements specified a need to compare roughly 40 fields from each of these sources against each other, and to report the differences in MS Excel format, which included details regarding how the data should be displayed in the spreadsheet.
As it turned out, the challenge was not about the amount of data being processed as I originally had suspected. Instead, the challenge became how to create a potentially large Excel file without causing memory meltdown on the server hardware.
Hopefully by sharing my experience here, it might save a little time for someone else, and thus give back a little bit to the developer community.
If you haven’t read the previous posts in this series (Introducing Spring Batch and Getting Started With Spring Batch), they serve as a quick start guide and simple example for learning the basics of Spring Batch. They also serve as the starting point for this article’s example code.
The Process
For our input data to this job, we’re going to be reading in the data provided from the following URL. It will generate a list of the NYSE traded stock data in CSV format. You can also click this link to download a physical file in order to take a look at the data format so you can see what to expect:
NOTE: If using Internet Explorer, you’ll need to visit the following URL: and look for the CSV download link at the top of the data results. Click the link “Download this list” and there may be a popup window in which you have to enter some text in order to get the download. This doesn’t appear to be an issue with Google Chrome or Firefox, nor is it an issue reading the download in Spring Batch as an input resource.
The reader for this step will be set up almost identically to the reader example in Part Two’s Getting Started With Spring Batch, because it is a CSV file in which we’re specifying a URL as the resource (and not an actual physical file that we are reading in). The only difference between this example and the configuration in Part 2’s example is that we need to define a custom FieldSetMapper specific to the type of data we are mapping from the input file.
Below are the bean configurations required to set up the reader for the step to convert the incoming file from CSV into Excel format. If you downloaded the stock data file and examined its contents, you should have noticed that the first line contains header information that we should not be mapping to a data object. This header information is skipped by adding the “linesToSkip” property to the FlatFileItemReader bean definition as you see below:
<bean name="stockDataReader" class="org.springframework.batch.item.file.FlatFileItemReader">
    <property name="resource" value="" />
    <property name="lineMapper" ref="stockDataLineMapper" />
    <property name="linesToSkip" value="1" />
</bean>

<bean name="stockDataLineMapper" class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
    <property name="fieldSetMapper" ref="stockDataFieldMapper" />
    <property name="lineTokenizer" ref="stockDataLineTokenizer" />
</bean>

<bean name="stockDataLineTokenizer" class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer" />
Secondly, we need to create the data object that we will map the incoming file record to. For this particular file, it will look like this:
package com.keyhole.example.poi;

import java.io.Serializable;
import java.math.BigDecimal;

public class StockData implements Serializable {

    private static final long serialVersionUID = 4383231542218565966L;

    private String symbol;
    private String name;
    private BigDecimal lastSale;
    private BigDecimal marketCap;
    private String adrTso;
    private String ipoYear;
    private String sector;
    private String industry;
    private String summaryUrl;

    // getters and setters removed for brevity
}
Now that we have defined the data object that our file will be mapped to, we need to create the custom FieldSetMapper implementation. It should look like this:
package com.keyhole.example.poi;

import java.math.BigDecimal;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.stereotype.Component;
import org.springframework.validation.BindException;

@Component("stockDataFieldMapper")
public class StockDataFieldSetMapper implements FieldSetMapper<StockData> {

    public StockData mapFieldSet(FieldSet fieldSet) throws BindException {
        StockData data = new StockData();
        data.setSymbol(fieldSet.readString(0));
        data.setName(fieldSet.readString(1));

        String lastSaleVal = fieldSet.readString(2);
        if ("n/a".equals(lastSaleVal)) {
            data.setLastSale(BigDecimal.ZERO);
        } else {
            data.setLastSale(new BigDecimal(lastSaleVal));
        }

        data.setMarketCap(fieldSet.readBigDecimal(3));
        data.setAdrTso(fieldSet.readString(4));
        data.setIpoYear(fieldSet.readString(5));
        data.setSector(fieldSet.readString(6));
        data.setIndustry(fieldSet.readString(7));
        data.setSummaryUrl(fieldSet.readString(8));
        return data;
    }
}
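That "n/a" guard is easy to sanity-check outside of Spring Batch. The sketch below extracts the logic into a hypothetical standalone method (parseLastSale is my name for it, not part of the original mapper) and exercises it with plain Java:

```java
import java.math.BigDecimal;

public class LastSaleParseDemo {

    // Same guard as in StockDataFieldSetMapper: the feed reports "n/a"
    // when a stock has no last sale price, which new BigDecimal(...) rejects.
    public static BigDecimal parseLastSale(String raw) {
        return "n/a".equals(raw) ? BigDecimal.ZERO : new BigDecimal(raw);
    }

    public static void main(String[] args) {
        System.out.println(parseLastSale("n/a"));   // 0
        System.out.println(parseLastSale("12.34")); // 12.34
    }
}
```

Without the guard, the first call would throw a NumberFormatException the moment the feed contains a stock with no reported last sale.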
After defining the configuration and implementation for reading the stock data file, we're ready to move on to implementing our Excel ItemWriter. The two most commonly used open source Java APIs for working with Excel are Apache POI and JExcelAPI. As most people might attest, generating large files with either typically results in a high memory footprint, since both require building the entire Excel workbook in memory prior to writing out the file.
However, beginning with Apache POI version 3.8-beta3 in June of 2011, developers now have the option to use a low-memory footprint Excel API. Apache POI also has additional advantages in that it is continually evolving and has a strong development community, ensuring that it will be maintained for the foreseeable future.
- If you are using Maven and an Eclipse-based IDE like SpringSource Tool Suite (STS), it’s very simple to obtain the Apache POI API for your project. Setting up a Spring Batch project in STS was detailed in Getting Started With Spring Batch so we won’t go into detail regarding project setup. Right click on the project in STS, select “Maven” and then select “Add Dependency.” In the dialogue box for the search entry, you’ll want to enter POI and look for the result that corresponds to the org.apache.poi package. After that, you’ll need to do the same process for poi-ooxml, likewise selecting the result that corresponds to the org.apache.poi package.
- If you are not using Maven, you'll need to visit the Apache POI website to download the latest version manually and move the required jars into your lib directory. This will include the poi jar, the poi-ooxml jar, and all of their associated jars. Details of each can be found on the Apache website.
This new Excel API from Apache is named SXSSF. It is an API-compatible streaming extension of XSSF, which is used to create the newer Excel 2007-based OOXML (.xlsx) files. It achieves its low memory footprint by keeping only a limited number of rows in memory within a sliding window. For example: if you define the sliding window as 50, then when the 51st row is created, the first row in the window is written to disk. This operation repeats as each new row is created and the oldest row in the window is written to disk. The older rows that are no longer in the window become inaccessible, since they have been flushed from memory.
To begin using this streaming version SXSSF, it is really just as simple as this:
Workbook wb = new SXSSFWorkbook(100);
By instantiating an SXSSFWorkbook object and calling the constructor that accepts an integer as the parameter, we have defined our workbook for streaming with a sliding window of 100 rows.
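To make the sliding-window behavior concrete, here is a simplified stdlib-only model of what SXSSF does. (The real implementation streams flushed rows to a temp file; this sketch just counts them.)

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RowWindowDemo {

    public static final int CAPACITY = 50;           // the sliding window size
    public static final Deque<String> window = new ArrayDeque<>();
    public static int flushedCount = 0;              // rows pushed out of memory

    public static void createRow(String row) {
        if (window.size() == CAPACITY) {
            window.removeFirst();                    // SXSSF would write this row to a temp file
            flushedCount++;
        }
        window.addLast(row);
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 51; i++) {
            createRow("row-" + i);
        }
        // Creating the 51st row flushed row-1; it is no longer accessible.
        System.out.println("in memory: " + window.size()
                + ", flushed: " + flushedCount
                + ", oldest accessible: " + window.peekFirst());
    }
}
```

Running this prints `in memory: 50, flushed: 1, oldest accessible: row-2` — the same bounded-memory behavior you get from the real workbook, no matter how many rows you create.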
Since the goal of this example is to reduce the memory footprint, we're going to be processing the file in chunks of 500. In order to process the file in chunks like this, we'll need to create our Excel workbook once at the beginning of the step and close the output stream at the very end of the step, writing the data out in between. To do this, we're going to give our ItemWriter methods that run before and after the step, implemented using Spring Batch's built-in annotations.
First, here’s the method that will be created to handle the BeforeStep interception. I have left out the details of how the title and header information were created, but they will be included in the complete code listing near the end.
@BeforeStep
public void beforeStep(StepExecution stepExecution) {
    outputFilename = FILE_NAME + ".xlsx";
    workbook = new SXSSFWorkbook(100);  // keep 100 rows in memory
    Sheet sheet = workbook.createSheet("Stock Data");
    addTitleToSheet(sheet);
    currRow++;  // blank spacer row between the title and the headers
    addHeaders(sheet);
    initDataStyle();
}
The method can be named anything, but by convention I normally name the method BeforeStep just to stay consistent with the purpose and use of the method. By annotating that method with @BeforeStep, this tells Spring Batch that before the step is executed, this method should be called and the StepExecution passed in as a parameter. Typically these methods are used to configure resources that will be used by the bean, whether that bean is a reader, processor, writer or tasklet.
It’s also important to note that if your bean is extending one or more classes, then there can only be one @BeforeStep or @AfterStep annotated method. The code listing here shows that we’re defining the file name and instantiating the workbook with a row sliding window of 100 (which is the same as the default but listed here to show how that would be defined). We also need to create the first sheet, add a title / header info to that sheet, and initialize the cell style that will be used for the data output.
Here is the method on the writer that will be called in the AfterStep phase of job execution:
@AfterStep
public void afterStep(StepExecution stepExecution) throws IOException {
    FileOutputStream fos = new FileOutputStream(outputFilename);
    workbook.write(fos);
    fos.close();
}
Just as with the BeforeStep annotated method, this AfterStep annotated method is typically used to wrap up necessary items after a step has completed. Just as their names imply, these methods are called before the step begins to execute and after the step has completed executing. The code listed here will create the output stream necessary for the Excel workbook to write to. And, once that has completed, we need to close the output stream. What’s important to note here is that when we are calling workbook.write(fos) at this point, it’s taking the temp files that were used to stream the Excel data out to disk and assembling them back into an Excel .xlsx file.
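Putting the annotations in context, the sequence Spring Batch drives for this step looks like the following stdlib-only sketch (a model of the call ordering, not the framework itself): one before-step call, one write per chunk, one after-step call.

```java
import java.util.ArrayList;
import java.util.List;

public class StepLifecycleDemo {

    public static final List<String> calls = new ArrayList<>();

    // One @BeforeStep call, one write(...) per chunk, one @AfterStep call.
    public static void runStep(int totalItems, int chunkSize) {
        calls.add("beforeStep");  // create the workbook, sheet, title, headers
        for (int done = 0; done < totalItems; done += chunkSize) {
            calls.add("write");   // one chunk of up to chunkSize items
        }
        calls.add("afterStep");   // stream the workbook out to the .xlsx file
    }

    public static void main(String[] args) {
        runStep(1200, 500);       // e.g. 1,200 items at a commit interval of 500
        System.out.println(calls);
    }
}
```

For 1,200 items and a chunk size of 500, this prints `[beforeStep, write, write, write, afterStep]` — the workbook is opened exactly once, written to in three chunks, and flushed to disk exactly once.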
So, now that we’ve defined setting up the Excel workbook and closing it out, it’s time to take care of the method that actually takes the data that was read from the input source and converts it into the rows and cells that will make up the detailed data of the Excel file.
Here’s the code listing of Write method:
public void write(List<? extends StockData> items) throws Exception {
    Sheet sheet = workbook.getSheetAt(0);
    for (StockData data : items) {
        // Repeat each record 300 times purely to inflate the output
        // and exercise the streaming API on a very large sheet.
        for (int i = 0; i < 300; i++) {
            Row row = sheet.createRow(currRow);
            createStringCell(row, data.getSymbol(), 0);
            createStringCell(row, data.getName(), 1);
            createNumericCell(row, data.getLastSale().doubleValue(), 2);
            createNumericCell(row, data.getMarketCap().doubleValue(), 3);
            createStringCell(row, data.getAdrTso(), 4);
            createStringCell(row, data.getIpoYear(), 5);
            createStringCell(row, data.getSector(), 6);
            createStringCell(row, data.getIndustry(), 7);
            createStringCell(row, data.getSummaryUrl(), 8);
            currRow++;
        }
    }
}
In this Write method, the code is pretty straightforward and simple. As we are looping through the list of StockData objects that were mapped from our input file, we are creating a new row and its cells for each item of data. Since the input file is only a little more than a couple of thousand rows, this wouldn’t be a good test of generating a large Excel file. That’s why you see the additional loop that will create 300 rows for each of the items we’re going to write. By the time the job finishes, we will have generated an Excel file that has a little over 800,000 rows — just to prove we can do it, not that you should.
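The row-count claim follows directly from the inflation loop. Assuming roughly 2,700 records in the NYSE feed (the exact count varies day to day), the arithmetic is:

```java
public class OutputSizeDemo {

    public static long rowsWritten(int inputRows, int copiesPerRow) {
        return (long) inputRows * copiesPerRow;
    }

    public static void main(String[] args) {
        long written = rowsWritten(2700, 300);  // ~2,700 records, 300 copies each
        // 810,000 rows end up in the sheet, yet SXSSF holds only 100 in memory.
        System.out.println(written + " rows written, at most 100 held in memory");
    }
}
```

That is the whole point of the streaming API: the memory footprint is governed by the window size, not by the number of rows in the finished file.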
The two methods below are convenience methods for the actual creation of each individual cell within the row, to simplify some repeated code:

private void createStringCell(Row row, String val, int col) {
    Cell cell = row.createCell(col, Cell.CELL_TYPE_STRING);
    cell.setCellStyle(dataCellStyle);
    cell.setCellValue(val);
}

private void createNumericCell(Row row, Double val, int col) {
    Cell cell = row.createCell(col, Cell.CELL_TYPE_NUMERIC);
    cell.setCellStyle(dataCellStyle);
    cell.setCellValue(val);
}
Putting it all together, here is the complete code listing for the StockDataExcelWriter. One important note regarding this class is its use of the @Scope (“step”) Spring annotation. By default, Spring beans are created as singletons when they are loaded into the Spring context. Since we are holding on to state with a few items (such as the current row being written, the workbook object, and a re-usable cell style), we need the framework to instantiate this StockDataExcelWriter as needed, once per step execution. Otherwise, we could potentially run into some thread-safe issues if this job were to run simultaneously.
package com.keyhole.example.poi;

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Calendar;
import java.util.List;

import org.apache.commons.lang3.time.DateFormatUtils;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.CellStyle;
import org.apache.poi.ss.usermodel.Font;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.util.CellRangeAddress;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.annotation.AfterStep;
import org.springframework.batch.core.annotation.BeforeStep;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component("stockDataExcelWriter")
@Scope("step")
public class StockDataExcelWriter implements ItemWriter<StockData> {

    private static final String FILE_NAME = "/data/example/excel/StockData";
    private static final String[] HEADERS = { "Symbol", "Name", "Last Sale",
            "Market Cap", "ADR TSO", "IPO Year", "Sector", "Industry",
            "Summary URL" };

    private String outputFilename;
    private Workbook workbook;
    private CellStyle dataCellStyle;
    private int currRow = 0;

    private void addHeaders(Sheet sheet) {
        Workbook wb = sheet.getWorkbook();
        CellStyle style = wb.createCellStyle();
        Font font = wb.createFont();
        font.setFontHeightInPoints((short) 10);
        font.setFontName("Arial");
        font.setBoldweight(Font.BOLDWEIGHT_BOLD);
        style.setAlignment(CellStyle.ALIGN_CENTER);
        style.setFont(font);
        Row row = sheet.createRow(2);
        int col = 0;
        for (String header : HEADERS) {
            Cell cell = row.createCell(col);
            cell.setCellValue(header);
            cell.setCellStyle(style);
            col++;
        }
        currRow++;
    }

    private void addTitleToSheet(Sheet sheet) {
        Workbook wb = sheet.getWorkbook();
        CellStyle style = wb.createCellStyle();
        Font font = wb.createFont();
        font.setFontHeightInPoints((short) 14);
        font.setFontName("Arial");
        font.setBoldweight(Font.BOLDWEIGHT_BOLD);
        style.setAlignment(CellStyle.ALIGN_CENTER);
        style.setFont(font);
        Row row = sheet.createRow(currRow);
        row.setHeightInPoints(16);
        String currDate = DateFormatUtils.format(Calendar.getInstance(),
                DateFormatUtils.ISO_DATETIME_FORMAT.getPattern());
        Cell cell = row.createCell(0, Cell.CELL_TYPE_STRING);
        cell.setCellValue("Stock Data as of " + currDate);
        cell.setCellStyle(style);
        CellRangeAddress range = new CellRangeAddress(0, 0, 0, 7);
        sheet.addMergedRegion(range);
        currRow++;
    }

    @AfterStep
    public void afterStep(StepExecution stepExecution) throws IOException {
        FileOutputStream fos = new FileOutputStream(outputFilename);
        workbook.write(fos);
        fos.close();
    }

    @BeforeStep
    public void beforeStep(StepExecution stepExecution) {
        System.out.println("Calling beforeStep");
        outputFilename = FILE_NAME + ".xlsx";
        workbook = new SXSSFWorkbook(100);  // keep 100 rows in memory
        Sheet sheet = workbook.createSheet("Stock Data");
        addTitleToSheet(sheet);
        currRow++;  // blank spacer row between the title and the headers
        addHeaders(sheet);
        initDataStyle();
    }

    private void initDataStyle() {
        dataCellStyle = workbook.createCellStyle();
        Font font = workbook.createFont();
        font.setFontHeightInPoints((short) 10);
        font.setFontName("Arial");
        dataCellStyle.setAlignment(CellStyle.ALIGN_LEFT);
        dataCellStyle.setFont(font);
    }

    public void write(List<? extends StockData> items) throws Exception {
        Sheet sheet = workbook.getSheetAt(0);
        for (StockData data : items) {
            // Repeat each record 300 times purely to inflate the output
            // and exercise the streaming API on a very large sheet.
            for (int i = 0; i < 300; i++) {
                Row row = sheet.createRow(currRow);
                createStringCell(row, data.getSymbol(), 0);
                createStringCell(row, data.getName(), 1);
                createNumericCell(row, data.getLastSale().doubleValue(), 2);
                createNumericCell(row, data.getMarketCap().doubleValue(), 3);
                createStringCell(row, data.getAdrTso(), 4);
                createStringCell(row, data.getIpoYear(), 5);
                createStringCell(row, data.getSector(), 6);
                createStringCell(row, data.getIndustry(), 7);
                createStringCell(row, data.getSummaryUrl(), 8);
                currRow++;
            }
        }
    }

    private void createStringCell(Row row, String val, int col) {
        Cell cell = row.createCell(col, Cell.CELL_TYPE_STRING);
        cell.setCellStyle(dataCellStyle);
        cell.setCellValue(val);
    }

    private void createNumericCell(Row row, Double val, int col) {
        Cell cell = row.createCell(col, Cell.CELL_TYPE_NUMERIC);
        cell.setCellStyle(dataCellStyle);
        cell.setCellValue(val);
    }
}
Here’s the Spring Batch configuration for the job:
<batch:job id="...">
    <batch:step id="...">
        <batch:tasklet>
            <batch:chunk reader="..." writer="stockDataExcelWriter" commit-interval="..." />
        </batch:tasklet>
    </batch:step>
</batch:job>
Now that we have proven that we can create huge Excel files, here are a few limitations of this approach, as listed on the Apache POI website:
- Only a limited number of rows are available at any point in time – the rows that remain in the window and haven’t been written to disk yet.
- Sheet.clone() is not supported.
- Formula evaluation is not supported.
Troubleshooting
There is one issue that I came across that took me a little while to resolve. If you use the OOXML formats with POI you might come across this error:
“Excel found unreadable content in ‘PoiTest.xlsx’. Do you want to recover the contents of this workbook? If you trust the source of this workbook, click Yes.”
And upon clicking “Yes,” it is followed up by an error dialogue similar to this:
This usually means that you have made a mistake in defining a style somewhere in your code. By clicking the link to the log at the bottom, there’s a good chance you’ll get pointed in the right direction of where the issue is. Hopefully this little nugget of information will save you a little time researching the error.
Conclusion
So now that we’re done, we have shown that there is a viable way of generating extremely large Excel workbooks without bringing the server to its knees.
The real question now becomes: do you really need this in Excel format? Does this 800,000+ row workbook provide any real value to the business? Just because you can, doesn’t always mean that you should. But sometimes it’s just fun to find out if you can.
Apache POI (Excel):
JExcel API:
Spring Batch:
Nice article on using Spring Batch with POI, but I am looking at POI generating .xls files. We have a requirement to generate an .xls file in our web app with a huge list of Trades, like 60,000 to 100,000. How will Spring Batch address that when using HSSFWorkbook instead of SXSSFWorkbook?
Hi Rahul, thanks for the question. You can still use Spring Batch and the HSSF model together, but unfortunately Spring Batch won’t address the memory issues associated with large .xls files. Because the HSSF model is based upon the older binary Excel version, it still requires you to create the entire workbook object in memory prior to writing the file out. By using the SXSSF model you don’t have to keep the entire Excel workbook in memory; instead it will periodically write out portions of the file, keeping memory usage low. There’s a small chart on the bottom of the POI Spreadsheet page that lists the different models and their features. Based upon that chart, the SXSSF model is the only one that supports buffered streaming when writing files.
There seems to be a problem with the two features, i.e., the sliding window of Excel (SXSSFWorkbook) and the commit interval of Spring Batch.
I tried writing 50K records. For every 10K I am creating a new sheet. So my expected output should be 10K in each.
But with SXSSFWorkbook(500) and commit-interval=500, the data seems to break in the wrong way in the 2nd to last sheets.
The data is written correctly, though, when it is all going into one sheet.
[Note: I am doing this workaround just because of the 1048575-row limit of SXSSFWorkbook in Excel.]
Sid.
Thanks Sid, I’ll check this out a little later in the week and see if I can replicate the issue. Are you using this exact code plus a few modifications to write the new worksheets? Or are there quite a few differences?
-jonny
Thanks Jonny for getting back. There is a change from my last update.
Just a minor change in the code near the row-creation logic. Here is the code I have changed. Besides, I am not using the inner loop of 300, as my reader query already fetches around 4252362 records. Hope you can find the reason I am losing data in the subsequent sheets. Is it because of some clash between the sliding window of SXSSFWorkbook and the commit-interval in Spring Batch?
Row row = null;
try {
    row = sheet.createRow(currRow);
} catch (IllegalArgumentException iae) {
    String strMessage = "Invalid row number (1048576) outside allowable range (0..1048575)";
    if (strMessage.equals(iae.getMessage())) {
        System.out.println("Exceeded limit");
        currRow = 0;
        sheet = workbook.createSheet();
        row = sheet.createRow(currRow);
    }
}
To look at the above problem at a smaller scope, I have a query which returns only 48K records. Then I change the above logic to the code below. The output (.xlsx) I get is 10K records in the first sheet but only 3 rows in the subsequent 4 sheets.
if (currRow % 10000 == 0) {
    System.out.println("1 million crossed");
    currRow = 0;
    sheet = workbook.createSheet();
}
In both the scenarios: commit-interval=500 and sliding window for SXSSFWorkbook(100).
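[Editor’s sketch] The symptom Sid describes can be reasoned about independently of POI: instead of resetting a shared currRow counter (which chunked writing can race past), derive both the sheet index and the in-sheet row from a single running total. The snippet below is only an illustration of the arithmetic – the names are hypothetical and Python is used just so the logic can be run in isolation; in the real writer the same arithmetic would pick the POI Sheet object and the createRow() index.

```python
def sheet_layout(total_rows, rows_per_sheet):
    """Count how many rows land on each sheet when the sheet index and
    in-sheet row are both derived from one running total."""
    counts = []
    for i in range(total_rows):
        sheet_index = i // rows_per_sheet   # which sheet this row belongs to
        row_in_sheet = i % rows_per_sheet   # where it goes on that sheet
        if sheet_index == len(counts):      # first row of a new sheet
            counts.append(0)
        counts[sheet_index] += 1
    return counts

# 48K records at 10K rows per sheet -> four full sheets and one with 8K
print(sheet_layout(48000, 10000))  # [10000, 10000, 10000, 10000, 8000]
```

Because sheet_index and row_in_sheet depend only on the running total, chunk boundaries (commit-interval) and the SXSSF window size cannot skew where a row lands.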
Hi Jonny Hackett,
It is really an awesome blog and thanks for sharing your experience.
Actually we are trying to write a custom item reader for Excel sheets to load the data from Excel into a database. But I guess we are missing something somewhere while customizing. Could you please help us with this and provide a sample program? It would be of great help.
Thanks.
It is really an awesome blog and thanks for sharing your experience. Nobody else has mentioned @BeforeStep & @AfterStep.
Hi Jonny, thank you for your feedback, it is really interesting, but I wonder why not use a framework like JasperReports to do this kind of work, especially since Jasper is based on Apache POI. Don’t you think that it would make the code base clearer and simpler? What is your opinion?
Hi Jonny,
could you please show me the full Spring configuration file?
I am facing an issue while configuring Spring Batch for the generation of a huge (millions of records) .xlsx file.
Your prompt reply is highly appreciated.
Regards,
Ashish
I am very lucky to have read the article, and I appreciate your hard work. I also have a question: 1. You prevent OOM when generating a big file by splitting the work into different chunks, but in a real project the users may want just one file. So how can that be done? Thanks again
In TurboGears 1.0, you could easily drop MochiKit into every page. You just added an entry to your .cfg file, and the script import would appear. In the past few years, a number of JavaScript libraries have burst on the scene, and every developer has his or her favorite. For TG2, we decided to leave the JavaScript Library choice up to you.
Luckily, TG has provided wrappers for all of the major JS libraries, including:
- jQuery
- Dojo
- ExtJS
- YUI
- MooTools
- and yes, MochiKit
The easiest way to take advantage of these ToscaWidget wrapper libraries is to install them, and then inject the main JavaScript widget into the WSGI environment for every page. Let’s see how we do this with Dojo. First, we need to install tw.dojo:
easy_install tw.dojo
Then, we want to modify the base controller in our project so that it injects the JS file link on every page call. Open up the mytgapp/lib/base.py file. Add the import for your selected JS library at the top of the file; in our case, this is dojo_js:
from tw.dojo import dojo_js
Next, modify the __call__ method of the BaseController. Call the inject method inside the __call__ method:
dojo_js.inject()
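To see the pattern in one place, here is a rough, runnable sketch of what the modified base.py amounts to. Note that FakeDojoJs and handle() are stand-ins invented for illustration – in a real project the import comes from tw.dojo and BaseController extends TGController:

```python
class FakeDojoJs:
    """Stand-in for tw.dojo's dojo_js widget."""
    def __init__(self):
        self.injections = 0

    def inject(self):
        # The real inject() registers the dojo.js <script> link with the
        # current request, so the link appears in the rendered page.
        self.injections += 1

dojo_js = FakeDojoJs()

class BaseController:
    """Stand-in for the BaseController in mytgapp/lib/base.py."""
    def __call__(self, environ, start_response):
        dojo_js.inject()               # runs once per page call
        return self.handle(environ, start_response)

    def handle(self, environ, start_response):
        return ["<html>...</html>"]    # placeholder page body

controller = BaseController()
controller({}, None)
controller({}, None)
print(dojo_js.injections)  # 2 - one injection per request
```

The design point is simply that __call__ wraps every dispatch, so placing inject() there guarantees the script link on every page without touching individual controllers or templates.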
You should now see a JavaScript link in your HTML:
<script type="text/javascript" src="/toscawidgets/resources/tw.dojo/static/1.3.2/min/dojo/dojo.js" djConfig="isDebug: false, parseOnLoad: true"></script>
That’s pretty much it. You have to figure out which name each library uses for its JS widgets, but most of them are fairly obvious. The other alternative is to put the file in your static directory and add it directly to your master.html template.
Please comment on this removal. The check 'defined(MACH_HYP) && 0' never evaluates to TRUE, so I'm guessing this was a way to comment out this code. I don't see hyp_console_write() anywhere defined except in xen, so there should be some checking for that too. Plus, the call itself look like it needs some rewrite. I'm not sure about this removal, so if there is a reason to keep this code, please ignore this patch. * device/cons.c [defined(MACH_HYP) && 0]: Remove hyp_console_write() call. --- device/cons.c | 9 --------- 1 file changed, 9 deletions(-) diff --git a/device/cons.c b/device/cons.c index 94d4ebf..92f1481 100644 --- a/device/cons.c +++ b/device/cons.c @@ -151,15 +151,6 @@ cnputc(c) kmsg_putchar (c); #endif -#if defined(MACH_HYP) && 0 - { - /* Also output on hypervisor's emergency console, for - * debugging */ - unsigned char d = c; - hyp_console_write(&d, 1); - } -#endif /* MACH_HYP */ - if (cn_tab) { (*cn_tab->cn_putc)(cn_tab->cn_dev, c); if (c == '\n') -- 1.8.1.4 | http://lists.gnu.org/archive/html/bug-hurd/2013-09/msg00093.html | CC-MAIN-2017-22 | refinedweb | 162 | 66.94 |
This .
Since its introduction. while the various setup and port/peripheral control will be micro specific. An example quoted to me – as a non believer – was: to create a stopclock function would take 2/3 days in C or 2 weeks in assembler. it has evolved and been standardized throughout the computing industry as an established development language. Fine on the larger program memory sized devices but not so efficient on smaller devices. whereas the H8 is 0=Input and 1=Output. but Microcontrollers and Microprocessors are different breed. This is fine when working with PC’s and mainframes. C is a portable language intended to have minimal modification when transferring programs from one computer to another. One of the first platforms for implementation was the PDP-11 running under a UNIX environment. .Introduction Why use C? The C language was development at Bell Labs in the early 1970’s by Dennis Ritchie and Brian Kernighan. The main program flow will basically remain unchanged. The use of C in Microcontroller applications has been brought about by manufacturers providing larger program and RAM memory areas in addition to faster operating speeds. ‘Ah’ I hear you say as you rush to buy a C compiler – why do we bother to write in assembler? It comes down to code efficiency – a program written in assembler is typically 80% the size of a C version.
memory. PICmicro®MCU Based Program Development Engineers starting development on PC based products have the luxury of basic hardware pre-wired (i. I find the easiest way to begin any development is to start with a clean sheet of paper together with the specification or idea. and attach the development board to a comm. Some of the simplest tasks can take a long time to develop and to perfect in proportion to the overall product – so be warned where tight timescales are involved. Port on a PC to enable the message to be viewed.. If we could get the whole of a PC in a 40 pin DIL package (including monitor and keyboard) we would use it. Start by drawing out a number of possible solutions and examine each to try to find the simplest and most reliable option. We will continue to use Microcontrollers like the PIC for low cost and portable applications. port within the PIC.PC Based vs. Draw out a flow chart. The product development then comes down to writing the software and debugging the errors. processor.e. A PC programmer could write the message “Hello World” and after compiling. printer and visual display (screen)). block diagram. set up the comm. The development tools for PIC based designs offer the developer basically the same facilities as the PC based development with the exception of the graphics libraries. have the message displayed on the screen. The PIC programmer would have to build an RS232 interface. keyboard. I/O connection plan or any suitable drawing to get started. ‘Why bother’ I hear you say (and so did I). Those embarking on a PIC based design have to create all the interfaces to the outside world in the form of input and output hardware. To design a product one needs: time – peace and quiet – a logical mind and most important of all a full understanding of the requirements. Do not discard the other ideas at this stage as there are possibly some good thoughts there. today’s miniaturization does not reach these limits. 
Build up a prototype board or hardware mimic board with all the I/O 8 . Product Development Product development is a combination of luck and experience. It comes down to portability of the end product. I/O.
configured. Then start writing code – in testable blocks – and gradually build up your program. Software The information that the Microcontroller needs to operate or run. it has almost unlimited applications. memory. signal conditioning circuits and all the components – connected to it 9 . Pascal or Assembler (one level up from writing your software in binary). Build up the program in simple stages – testing as you go. metal and purified sand. power supplies. This saves trying to debug 2000 lines of code in one go! If this is your first project – THEN KEEP IT SIMPLE – try switching an LED or two on and off from push buttons to get familiar with the instructions. This needs to be free of bugs and errors for a successful application or product. Terminology Let’s start with some basic terminology used. Software can be written in a variety of languages such as C. interface components. When software controls a microcontroller. Now let’s get started with the general terms. Before the design process starts. Don’t forget I/O pins can be swapped to make board layout easier at a later date – usually wit minimal modification to the software. The Idea An idea is born – maybe by yourself in true EUREKA style or by someone else having a need for a project – the basic concept is the same. Hardware The Microcontroller. the basic terminology needs to be understood – like learning a new language. terms and development kit) needs to be thoroughly understood before the design can commence. Microcontroller A lump of plastic. I/O is needed in most cases to allow the microcontroller to communicate. assembly technique and debugging before attempting a mammoth project. I/O A connection pin to the outside world which can be configured as input or output. Rework your flowchart to keep it up to date. control or read information. which without any software. the PIC language (instruction set. So in the case of Microcontroller designs based on the PICmicro®MCU. 
some facts about the PIC and the difference between Microprocessor and Microcontroller based systems. does nothing.
Source File A program written in a language the assembler and you understand. The . Simulator The MPLAB® development environment has its own built-in simulator which allows access to some of the internal operation of the microcontroller. Another product for 16C5x development is the SIM ICE – a hardware simulator offering some of the ICE features but at a fraction of the cost.OBJ or .ERR) contains a list of errors but does not give any indication as to their origin. simulator or ICE understands to enable it to perform its function. Full trace.LST Other Files The error file (. Both the PICSTART PLUS and PROMATE II from Microchip connect to the serial port. They come in all shapes and sizes and costs vary. Another way of looking at (especially when it does not work) is that you can kick hardware. Error checking is built in. however. Programmer A unit to enable the program to be loaded into the microcontroller’s memory which allows it to run without the aid of an ICE. The source file has to be processed before the Microcontroller will understand it. File extension is . The file extension is . Assembler / Compiler A software package which converts the Source file into an Object file. available. In Circuit Emulator (ICEPIC or PICmicro®MCU MASTER) a very useful piece of equipment connected between your PC and the socket where the Microcontroller will reside.to make it work and interface to the outside world. Object File This is s file produced by the Assembler / Compiler and is in a form which the programmer. The ICE allows you to step through a program.COD file is used by the emulator. List File This is a file created by the Assembler / Compiler and contains all the instructions from the Source file together with their hexadecimal values alongside and comments you have written. a heavily used feature in debugging a program as errors are flagged up during the assembly process. step and debug facilities are. 
MPASM is the latest assembler from Microchip handling all the PIC family.HEX depending on the assembler directive. 10 . If an event occurs ‘somewhere about there’. This is a good way of testing your designs if you know when events occur. This is the most useful file to examine when trying to debug the program as you have a greater chance of following what is happening within the software than the Source file listing. It enables the software to be run on the PC but look like a Microcontroller at the circuit board end. watch what happens within the micro and how it communicates with the outside world. you might find the simulator restrictive.
Each section can vary in complexity from the basic to all bells and whistles. analog and special functions and is the section which communicates with the outside world.Bugs Errors created free of charge by you. These range from simpel typin errus to incorrect use of the software language syntax errors. others will have to be sought and corrected by trial and error. ROM. I/O and Memory – with the addition of some support circuitry. EPROM. DATA I/O DIGITAL PWM ANALOG RS232 I2C ADDRESS CPU 4. 8. EEPROM or any combination of these and is used to store the program and data. The memory can be RAM.LST file. The central processor unit (CPU) is the heart of the system and can work in 4. An oscillator is required to drive the microprocessor. 8. compute the results and then output the information. or 16 bit data formats to perform the calculations and data manipulation. 16 BIT ADDRESS MEMORY RAM EPROM EEPROM WATCHDOG TIMER OSCILLATOR TYPICAL MICROPROCESSOR SYSTEM Taking each one in turn: Input/output (I/O) can comprise digital. 11 . The oscillator can be made from discrete components or be a ready made module. Microprocessor A microprocessor or digital computer is made up of three basic sections: CPU. Its function is to clock data and instructions into the CPU. Most of these bugs will be found by the compiler and shown up in a .
memory and special functions to meet most requirements of the development engineer. Conventional microcontrollers tend to have one internal bus handling both data and program. watchdog and I/O incorporated within the same chip. You will find many general books on library shelves exploring the design of microcontrollers. memory.Other circuitry found associated with the microprocessor are the watch dog timer – to help prevent system latch up. The PIC family of microcontrollers offers a wide range of I/O. The throughput rate is therefore increased due to simultaneous access to both data and program memory. GOTO or bit testing instructions (BTFSS. with the exception of CALL. It is normal to refer to a Microprocessor as a product which is mainly the CPU area of the system. This can occur in a non Harvard architecture microcontroller using 8-bit busses. buffering for address and data busses to allow a number of chips to be connected together without deteriorating the logic levels and decode logic for address and I/O to select one of a number of circuits connected on the same bus. Microcontrollers The PICmicro®MCU. Why use the PIC Code Efficiency The PIC is an 8 bit Microcontroller based on the Harvard architecture – which means there are separate internal busses for memory and data. Instruction Set There are 33 instructions you have to learn in order to write software for the 16C5x family and 14 bits wide for the 16Cxx family. so the subject will not be expanded or duplicated here other than to explain the basic differences. but in some circumstances can limit the design to a set memory size and I/O capabilities. design time and external peripheral timing and compatibility problems. Each instruction. Speed The PIC has an internal divide by 4 connected between the oscillator 12 . This slows operation down by at least a factor of 2 when compared to the PICmicro®MCU. is a Microcontroller and has all the CPU. 
There is no likelihood of the software jumping onto the DATA section of a program and trying to execute DATA as instructions. INCFSZ). oscillator. This saves space. Address Bus and Address Decoding to enable correct operation. microprocessors and computers. The I/O and memory would be formed from separate chips and require a Data Bus. executes in one cycle. Safety All the instructions fit into a 12 or 14 bit program memory word. on the other hand.
Drive Capability The PIC has a high output drive capability and can directly drive LEDs and triacs etc. In practice you would not actually do this. This makes instruction time easy to calculate. Any I/O pin can sink 25mA or 100mA for the whole device. I/O lines. temperature. especially where space is at a premium. Versatility The PIC is a versatile micro and in volume is a low cost solution to replace even a few logic gates. Options A range of speed. The PIC is a very fast micro to work with e. In Sleep. the PIC takes only its standby current which can be less the 1uA. Each instruction cycle then works out at 1 uS.and the internal clock bus. a 20MHz crystal steps through a program at 5 million instructions per second! – almost twice the speed of a 386SX 33! Static Operation The PIC is a fully static microprocessor. you would place the PIC into a Sleep mode – this stops the clock and sets up various flags within the PIC to allow you to know what state it was in before the Sleep. package. timer functions. A/D and memory sizes is available from the PIC family to suit virtually all your requirements. serial comms.g. if you stop the clock. 13 . in other words. all the register contends are maintained. especially if you use a 4 MHz crystal.
PIC FUNCTION BLOCK DIAGRAM – PIC16F84A (14 bit) block diagram (figure not reproduced)
The tools for development are readily available and are very affordable even for the home enthusiast. In Circuit Emulator and necessary hardware for the PIC can be prohibitive at the evaluation stage of a project. the code will fall over at some point or other.. Development The PIC is available in windowed form for development and OTP (one time programmable) for production. Use mixed case names to improve the readability ErrorCheck is easier than ERRORCHECK Prefix names with a lowercase letter of their type. The following recommendations were taken from a C++ Standards document and have been adapted for the PIC. The C compiler supplied on this disk was obtained from the Internet and is included as a test bed for code learning. Trying and Testing Code Getting to grips with C can be a daunting task and the initial outlay for a C compiler. the contents of the program memory cannot be read out in a way that the program code can be reconstructed. If the foundations are weak. Basic code examples and functions can be tried.Security The PICmicro®MCU has a code protection facility which is one of the best in the industry. again to improve readability: g Global gLog. r Reference rStatus(). Braces{} Braces or curly brackets can be used in the traditional UNIX way if (condition) { ……………. } or the preferred method which is easier to read if (condition) 15 . tested and viewed before delving into PIC specific C compilers which handle I/O etc. Once the protection bit has been programmed. s Static sValueIn.
b=0. or could someone else follow your program as it stands today? Use comments to mark areas where further work needs to be done. You know how your program operates today but in two weeks or two years will you remember. errors to be debugged or future enhancements to the product. Else If Formatting Include an extra Else statement to catch any conditions not covered by the preceding if’s if (condition) { } else if (condition) { } else { ………. 16 .{ …………….. Indent text only as needed to make the software readable. } Tabs and Indentation Use spaces in place of tabs as the normal tab setting of 8 soon uses up the page width. Also. /* catches anything else not covered above */ } Condition Format Where the compiler allows it. Comments Comments create the other half of the story you are writing. The value is also placed in a prominent place. Line Length Keep line lengths to 78 characters for compatibility between monitors and printers. If one = is omitted. the compiler will find the error for you. if ( 6 == ErrorNum) … Initialize All Variables Set all variables to a known values to prevent ‘floating or random conditions’ int a=6. tabs set in one editor may not be the same settings in another – make the code portable. always put the constant on the left hand side of an equality / inequality comparison.
Other programs will loop back towards the start point such as traffic light control. you need not only the software hooks but also the physical hardware to connect the micro to the outside world. The 14 bit core (PIC16Cxx family) reset at 00h. 3FFh. Using C and a PC is straightforward as the screen. a routine to set up a baud rate for communications. One of the most widely used first programming examples in high level languages like Basic or C is printing ‘Hello World’ on the computer screen. 17 . the 12 bit core (PIC16C5x and 12C50x) reset at the highest point in memory – 1FFh.g. The finish point would be where the program stops if run only once e. The basic hooks need to be placed in the program to link the program to the peripherals. 7FFh. Such a system is shown below. keyboard and processor are all interconnected.Basics All computer programs have a start. When developing a program for the PICmicro® MCU or any microprocessor / microcontroller system. The start point in Microcontrollers is the reset vector.
Start with a simple code example – not 2000 lines of code! In Assembler this would be:-

main    btfss   porta,switch    ;test for switch closure
        goto    main            ;loop until pressed
        bsf     portb,led       ;turn on led
lp1     btfsc   porta,switch    ;test for switch open
        goto    lp1             ;loop until released
        bcf     portb,led       ;turn off led
        goto    main            ;back to start

In C this converts to
main()
{
    set_tris_b(0x00);               // set port b as outputs
    while(true)
    {
        if (input(PIN_A0))          // test for switch closure
            output_high(PIN_B0);    // if closed turn on led
        else
            output_low(PIN_B0);     // if open turn off led
    }
}

When assembled, the code looks like this:-

0007    MOVLW   00
0008    TRIS    6
0009    BTFSS   05,0
000A    GOTO    00D
000B    BSF     06,0
000C    GOTO    00E
000D    BCF     06,0
000E    GOTO    009

As you can see, the compiled version takes more words in memory – 14 in C as opposed to 9 in Assembler. This is not a fair example on code but as programs get larger, the more efficient C becomes in code usage.
A definition also allocates the storage needed for variables and functions. Preprocessor directive A preprocessor directive is a command to the C preprocessor (which is automatically invoked as the first step in compiling a program). expressions. statements and functions. definitions. and types used in the program. which includes the text of an external file into a program. Expression An expression is a combination of operators and operands that yields a single value. The two most common preprocessor directives are the #define directive. Braces enclose the body of a function. functions. Declaration A declaration establishes the names and attributes of variables.1 The Structure of C Programs All C program contain preprocessor directives. and statements that performs a specific task. Functions may not be nested in C. declarations. Global variables are declared outside functions and are visible from the end of the declaration to the end of the file. and the #include directive. Function A function is a collection of declarations. which substitutes text for the specified identifier. 22 . Definition A definition establishes the contents of a variable or function. expressions. definitions. Statement Statements control the flow or order of program execution in a C program. A local variable is declared inside a function and is visible form the end of the declaration to the end of the function.1.
main Function All C programs must contain a function named main where program execution begins. Example: General C program structure #include <stdio. Statements are the parts of the program that actually perform operations. /* pass a value to a function */ area = PI * radius_squared. #include <stdio.area).142 float area. The following example shows some of the required parts of a C program. indentations. The braces that enclose the main function define the beginning and ending point of the program.4f square units\n”. /* assignment statement */ printf(“Area is %6.h> /* My first C program */ main() { printf(“Hello world!”). } /* end of main function & program */ square(int r) { int r_squared. but also for those who bravely follow on. */ } /* function head */ /* declarations here are known */ /* only to square */ /* return value to calling statement 1. return(r_squared). Functions are subroutines. /* /* /* /* preprocessor directive */ include standard C header file */ global declaration */ prototype declaration */ main() { /* beginning of main function */ int radius_squared.h> #define PI 3. 23 . All C programs contain one or more functions. improve the readability – not only for yourself at a later date. int square (int r). blank lines and comments. r_squared = r * r. /* local declaration */ int radius = 3. /* declaration and initialization */ radius_squared = square (radius).2 Components of a C program All C programs contain essential components such as statements and functions. each of which contains one or more statements and can be called upon by other parts of the program. When writing programs.
The statement #include <stdio.h> tells the compiler to include the source code from the file 'stdio.h' into the program. The extension .h stands for header file. A header file contains information about standard functions that are used in the program. The header file stdio.h, which is called the STandarD Input and Output header file, contains most of the input and output functions. It is necessary to use only the include files that pertain to the standard library functions used in your program.

/* My first C program */ is a comment in C. Comments are ignored by the compiler and therefore do not affect the speed or length of the compiled code.

All C programs must have a main() function. This is the entry point into the program. All functions have the same format, which is:

    FunctionName()
    {
        code
    }

The curly braces { and } show the beginning and ending of blocks of code in C. Statements within a function are executed sequentially, beginning with the open curly brace and ending with the closed curly brace.

Finally, the statement printf("Hello world!"); presents a typical C statement. Almost all C statements end with a semicolon (;). The end-of-line character is not recognized by C as a line terminator. Therefore, there are no constraints on the position of statements within a line or on the number of statements on a line. All statements have a semicolon (;) at the end to inform the compiler it has reached the end of the statement and to separate it from the next statement. Failure to include it will generally flag an error in the NEXT line. The if statement is a compound statement, and the ; needs to be at the end of the compound statement:

    if (ThisIsTrue)
        DoThisFunction();

1.3 #pragma
The pragma command instructs the compiler to perform a particular action at compile time, such as specifying the PICmicro® MCU being used or the file format generated.

    #pragma device PIC16C54

In CCS C the pragma is optional, so the following is accepted:

    #device pic16c54

1.4 main()

Every program must have a main function, which can appear only once. No parameters can be placed in the ( ) brackets which follow. The keyword void may optionally appear between the ( and ) to clarify that there are no parameters. As main is classed as a function, all code which follows must be placed within a pair of braces { } or curly brackets.

    main()
    {
        body of program
    }

1.5 #include

Lines beginning with # are called pre-processor directives. The #include directive makes the contents of another file available to the program:

    #include <16c71.h>
    #include <ctype.h>
    #use rs232(baud=9600, xmit=PIN_B0, rcv=PIN_B1)

    main()
    {
        printf("Enter characters:");
        while(TRUE)
            putc(toupper(getc()));
    }

The definitions PIN_B0 and PIN_B1 are found in the header file 16C71.H. The function toupper is found in the header file CTYPE.H. Both of these header files must be included before the definitions and functions they contain can be used.

You can #define any text. For example:

    #define NOT_OLD (AGE<65)

    if NOT_OLD
        printf("YOUNG");

Because a #define is resolved at compile time, #define data is not stored in memory.

To save constants in the chip ROM, use the const keyword in a variable declaration. For example:

    char const id[5]={"1234"};

We use five locations to hold the string because the string should be terminated with the null (\0) character.

1.9 Comments

Comments are used to document the meaning and operation of the source code. All comments are ignored by the compiler. A comment can be placed anywhere in the program except for the middle of a C keyword, function name, or variable name. Comments can be many lines long and may also be used to temporarily remove a line of code. Comments have two formats. The first format is used by all C compilers and is

    /* This is a comment */

The second format is supported by most compilers and is

    // This is a comment

Finally, comments cannot be nested.

EXERCISE: Which of the following comments is valid? invalid?

    /* My comment is very short */
    /* My comment is very, very, very, very, very, very, very, very, very long and is valid */
    /* This comment /* looks */ ok, but is invalid */
1.10 Functions

Functions are the basic building blocks of a C program. All C programs contain at least one function, main(). Most programs that you will write will contain many functions. The format for a C program with many functions is:

    main()
    {
    }

    function1()
    {
    }

    function2()
    {
    }

main() is the first function called when the program is executed. The other functions, function1() and function2(), can be called by any function in the program. Traditionally main() is not called by any other function; however, there are no restrictions in C. The following is an example of two functions in C.

    main()
    {
        printf("I ");
        function1();
        printf("c.");
    }

    function1()
    {
        printf("like ");
    }

One reminder when writing your own functions is that when the closed curly brace of a function is reached, the program will start executing code one line after the point at which the function was originally called. See also section 3.1.

1.11 Macros

#define is a powerful directive as illustrated in the previous section. Macros are used to enhance readability or to save typing. C allows defines to have parameters, making them even more powerful. When parameters are used, the define is called a macro. A simple macro:
    #define var(x,v) unsigned int x=v;

    var(a,1)
    var(b,2)
    var(c,3)

is the same as:

    unsigned int a=1;
    unsigned int b=2;
    unsigned int c=3;

Another example that will be more meaningful after reading the expressions chapter:

    #define MAX(A,B) ((A>B) ? A : B)

    z = MAX(x,y);    // z will contain the larger value, x or y

1.12 Conditional compilation

C has some special pre-processor directives that allow sections of code to be included or excluded from compilation based on compile time settings. Consider the following example:

    #define HW_VERSION 5

    #if HW_VERSION>3
    output_high(PIN_B0);
    #else
    output_high(PIN_B1);
    #endif

The above will compile only one line depending on the setting of HW_VERSION. The #if is evaluated and finished when the code is compiled, unlike a normal if that is evaluated when a program runs. There may be dozens of these #if's in a file, and the same code could be compiled for different hardware versions just by changing one constant.

#ifdef simply checks to see if an ID was #defined. Example:

    #define DEBUG

    #ifdef DEBUG
    printf("ENTERING FUNCT X");
    #endif

In this example all the debugging lines in the program can be eliminated from the compilation by removing or commenting out the one #define line.

1.13 Hardware Compatibility
The compiler needs to know about the hardware so the code can be compiled correctly. A typical program begins as follows:

    #include <16c74.h>
    #fuses hs,nowdt
    #use delay(clock=8000000)

The first line includes device-specific #defines, such as the pin names. The second line sets the PICmicro® MCU fuses; in this case the high speed oscillator and no watch dog timer. The last line tells the compiler what the oscillator speed is. The following are some other example lines:

    #use rs232(baud=9600, xmit=PIN_C6, rcv=PIN_C7)
    #use i2c(master, scl=PIN_B6, sda=PIN_B7)

The example programs in this book do not show these hardware-defining lines. These are required to compile and run the programs.

1.14 C Keywords

The ANSI C standard defines 32 keywords for use in the C language. In C, certain words are reserved for use by the compiler to define data types or for use in loops. All C keywords must be in lowercase for the compiler to recognize them. In addition, many C compilers will add several additional keywords that take advantage of the processor's architecture. The following is a list of the keywords which are reserved from use as variable names.

    auto        double      int         struct
    break       else        long        switch
    case        enum        register    typedef
    char        extern      return      union
    const       float       short       unsigned
    continue    for         signed      void
    default     goto        sizeof      volatile
    do          if          static      while

EXERCISE:
1. Write a program that prints your name to the screen using a printf() statement.
2. Write a program that declares one integer variable called year. This variable should be given the value of the current year and then, using a printf() statement, display the value of year on the screen. The result of your program should look like this:

    The year is 1998
Variables

An important aspect of the C language is how it stores data. This chapter will examine more closely how variables are used in C to store data. The topics discussed in this chapter are:

    Data type declarations
    Assignments
    Data type ranges
    Type conversions
2.1 Data Types

The C programming language supports five basic data types and four type modifiers. The table in this section shows the meanings of the basic data types and type modifiers. C allows a shorthand notation for the data types unsigned int, short int, and long int: simply use the word unsigned, short, or long without the int.

The next table shows the possible range of values for all the possible combinations of the basic data types and modifiers; for example, a float ranges from 3.4E-38 to 3.4E+38.

NOTE: See individual C compiler documentation for actual data types and numerical range.

To make arithmetic operations easier for the CPU, C represents all negative numbers in the 2's complement format. To find the 2's complement of a number, simply invert all the bits and add a 1 to the result. For example, to convert the signed number 29 into 2's complement:

    00011101        =  29
    11100010        invert all bits
  +        1        add 1
    11100011        = -29

To understand the difference between a signed number and an unsigned number, type in the following program.

    main()
    {
        int i;            /* signed integer   */
        unsigned int u;   /* unsigned integer */

        u = 35000;
        i = u;
        printf("%d %u\n", i, u);
    }

The unsigned integer 35000 is represented by -30536 in signed integer format.

Example of assigning a long value of 12000 to variable a: 12000 in hex is 2EE0. The following code extract assigns the lower byte (E0) to register 11h and the upper byte (2E) to register 12h.

    long a = 12000;

    main()
    {
    }

    0007:  MOVLW  E0
    0008:  MOVWF  11
    0009:  MOVLW  2E
    000A:  MOVWF  12

EXERCISE:
1. Type in the program above and observe the two values printed.
2. Write this statement in another way: long int i;

2.2 Variable Declaration

Variables are declared in the following manner:

    type variable_name;

Where type is one of C's valid data types and variable_name is the name of the variable. Variables can be declared in two basic places: inside a function or outside all functions. The variables are called local and global, respectively.
Local variables (declared inside a function) can only be used by statements within the function where they are declared. The value of a local variable cannot be accessed by statements outside of the function. It is acceptable for local variables in different functions to have the same name. Consider the following example:

    void f2(void)
    {
        int count;

        for (count = 0; count < 10; count++)
            printf("%d \n", count);
    }

    main()
    {
        int count;

        for (count = 0; count < 10; count++)
            f2();
    }

This program will print the numbers 0 through 9 on the screen ten times. The operation of the program is not affected by a variable named count located in both functions. The most important thing to remember about local variables is that they are created upon entry into the function and destroyed when the function is exited. Local variables must also be declared at the start of the function, before the statements.

Global variables, on the other hand, can be used by many different functions. Global variables must be declared before any functions that use them. Most importantly, global variables are not destroyed until the execution of the program is complete. The following example shows how global variables are used.

    int max;

    f1()
    {
        int i;

        for (i = 0; i < max; i++)
            printf("%d ", i);
    }

    main()
    {
        max = 10;
        f1();
        return 0;
    }

In this example, both functions main() and f1() reference the variable max. The function main() assigns a value to max, and the function f1() uses the value of max to control the for loop.

Both local and global variables may share the same name in C:

    int count;

    f1()
    {
        int count;

        count = 100;
        printf("count in f1(): %d\n", count);
    }

    main()
    {
        count = 10;
        f1();
        printf("count in main(): %d\n", count);
        return 0;
    }

In main() the reference to count is the global variable. In f1() the local variable count overrides the usage of the global variable.

EXERCISE:
1. What are the main differences between local and global variables?
2. Type in the program above and check the output.
2.3 Variable Assignment

Up to now we have only discussed how to declare a variable in a program, not how to assign a value to it. Assignment of values to variables is simple:

    variable_name = value;

An example of assigning the value 100 to the integer variable count is:

    count = 100;

The value 100 is called a constant. Many different types of constants exist in C. Whole numbers are used when assigning values to integer types. Floating point numbers must use a value with a decimal point; for example, to tell C that the value 100 is a floating point value, use 100.0. A character constant is specified by enclosing the character in single quotes, such as 'M'. Since a variable assignment is a statement, we have to include the semicolon at the end.

A variable can also be assigned the value of another variable. The following program illustrates this assignment.

    main()
    {
        int i;
        int j;

        i = 0;
        j = i;
    }

Variables can also be initialized at the same time as they are declared. This makes it easier and more reliable in setting the starting values in your program to known conditions. E.g.:

    int a = 10;
    int b = 0;
    int c = 0x23;

EXERCISE:
1. Write a program that declares one integer variable called count. Give count a value of 100 and use a printf() statement to display the value. The output should look like this:

    100 is the value of count

2. Write a program that declares three variables of types char, float, and double with variable names of ch, f, and d. Assign an 'R' to the char, 50.5 to the float, and 156.007 to the double. Display the value of these variables to the screen. The output should look like this:

    ch is R
    f is 50.5
    d is 156.007

2.4 Enumeration

In C, it is possible to create a list of named integer constants. This declaration is called enumeration. The list of constants created with an enumeration can be used any place an integer can be used. The general form for creating an enumeration is:

    enum name {enumeration list} variable(s);

For example, in the statement

    enum color_type {red,green,yellow} color;

the variable color can only be assigned the values red, green,
or yellow (i.e., the values defined in the enumeration list). Enumeration variables may contain only the values that are defined in the enumeration list. This example illustrates assigning such a value:

    color = red;

The compiler will assign integer values to the enumeration list starting with 0 at the first entry. Each entry is one greater than the previous one. Therefore, in the above example red is 0, green is 1 and yellow is 2. This default value can be overridden by specifying a value for a constant. For example:

    enum color_type {red,green=9,yellow} color;

This statement assigns 0 to red, 9 to green and 10 to yellow.

The variable list is an optional item of an enumeration. Once an enumeration is defined, the name can be used to create additional variables at other points in the program. For example, the variable mycolor can be created with the color_type enumeration by:

    enum color_type mycolor;

The variable can also be tested against another one:

    if (color==mycolor)
        // do something

Essentially, enumerations help to document code. Instead of assigning a value
to a variable, an enumeration can be used to clarify the meaning of the value.

EXERCISE:
1. Create an enumeration of currency from the lowest to highest denomination.
2. Create an enumeration of the PIC17CXX family line.
3. Is the following fragment correct? Why/Why not?

    enum {PIC16C51,PIC16C52,PIC16C53} device;

    device = PIC16C52;
    printf("First PIC was %s\n", device);

2.5 typedef

The typedef statement is used to define new types by means of existing types. The format is:

    typedef old_name new_name;

The new name can be used to declare variables. For example, the following program uses the name smallint for the type signed char.

    typedef signed char smallint;

    main()
    {
        smallint i;

        for (i=0; i<10; i++)
            printf("%d ", i);
    }

When using a typedef, you must remember two key points:

1. A typedef does not deactivate the original name or type. For instance, in the previous example signed char is still a valid type.
2. Several typedef statements can be used to create many new names for the same original type.

Typedefs are typically used for two reasons. The first is to create portable programs. If the program you are writing will be used on machines with 16-bit and 32-bit integers, you might want to ensure that only 16-bit integers are used. The program for 16-bit machines would use

    typedef int myint;

Then, before compiling the program for the 32-bit computer, the typedef statement should be changed to

    typedef short int myint;
so that all integers declared as myint are 16-bits.

The second reason to use typedef statements is to help you document your code. If your code contains many variables used to hold a count of some sort, you could use the following typedef statement to declare all your counter variables:

    typedef int counter;

Someone reading your code would recognize that any variable declared as counter is used as a counter in the program.

EXERCISE:
1. Make a new name for unsigned long called UL. Use this typedef in a short program that declares a variable using UL, assigns a value to it and displays the value to the screen.
2. Is the following segment of code valid? Why/Why not?

    typedef int height;
    typedef height length;
    typedef length depth;
    depth d;

2.6 Type Conversions

C allows you to mix different data types together in one expression. The mixing of data types is governed by a strict set of conversion rules that tell the compiler how to resolve the differences. The first part of the rule set is a type promotion: the C compiler will automatically promote a char or short int in an expression to an int when the expression is evaluated. A type promotion is only valid during the evaluation of the expression; the variable itself does not become physically larger.

For example, the following is a valid code fragment:

    char ch = '0';
    int i = 15;
    float f = 25.6;

Once the automatic type promotions have been completed, the C compiler will convert all variables in the expression up to the type of the largest variable. This is done on an operation-by-operation basis. The following algorithm shows the type conversions:

    IF an operand is a long double
        THEN the other is converted to long double
    ELSE IF an operand is a double
        THEN the other is converted to double
    ELSE IF an operand is a float
        THEN the other is converted to float
    ELSE IF an operand is an unsigned long
        THEN the other is converted to unsigned long
    ELSE IF an operand is a long
        THEN the other is converted to long
    ELSE IF an operand is an unsigned int
        THEN the other is converted to unsigned int
Consider the expression ch*i/f using the variables declared above, where the result is stored in a variable of type double. The first operation is the multiplication of ch with i. ch is promoted to an int; since both of these variables are now integers, no type conversion takes place. The next operation is the division between ch*i and f. The algorithm specifies that if one of the operands is a float, the other will be converted to a float, so the result of ch*i will be converted to a floating point number and then divided by f. Finally, the value of the expression ch*i/f is a float, but it will be converted to a double for storage in the variable result.

Instead of relying on the compiler to make the type conversions, you can specify the type conversions by using the following format:

    (type) value

This is called type casting. Where type is a valid C data type and value is the variable or constant. This causes a temporary change in the variable. The following code fragment shows how to print the integer portion of a floating point number.

    float f;

    f = 100.2;
    printf("%d", (int)f);

The number 100 will be printed to the screen after the segment of code is executed.

    int a = 250;
    int b = 10;
    long c;

    c = a * b;

The result will be 196. If two 8-bit integers are multiplied, the result will be an 8-bit value; the arithmetic is performed before the result is assigned to the new variable, so no type conversion takes place even when the resulting value is assigned to a long. So if you need a long value as the answer, then at least one value needs to be initially defined as long or typecast.
    c = (long) a * b;

The result will be 2500 because a was first typecast to a long and therefore a long multiply was done.

2.7 Variable Storage Class

Every variable and function in C has two attributes: type and storage class. The type has already been discussed as char, int, etc. There are four storage classes: automatic, external, static and register. These storage classes have the following C names:

auto
Variables declared within a function are auto by default, so

    {
        char c;
        int a, b, e;
    }

is the same as

    {
        auto char c;
        auto int a, b, e;
    }

When a block of code is entered, the compiler assigns RAM space for the declared variables. The RAM locations are used in that 'local' block of code and can/will be used by other blocks of code.

    main()
    {
        char c = 0;   /* register 0Eh assigned to c              */
        int a = 1;    /* load w with 1, load a's register with w */
        int b = 3;    /* load w with 3, load b's register with w */
        int e = 5;    /* load w with 5, load e's register with w */
    }
static
The variable class static defines globally active variables which, unless otherwise defined, are initialized to zero.

    static int count = 4;

    void test()
    {
        char x, y, z;

        printf("count = %d\n", ++count);
    }

The variable count is initialized once, and thereafter increments every time the function test is called.

extern
The extern keyword declares a variable or a function and specifies that it has external linkage. That means its name is visible from files other than the one in which it is defined.

register
The variable class register originates from large system applications, where it would be used to reserve high speed memory for frequently used variables. The class is used only to advise the compiler; it has no function within CCS C.
Functions

Functions are the basic building blocks of the C language. All statements must be within functions. In this chapter we will discuss how to pass arguments to functions and how to receive an argument from a function. The topics discussed in this chapter are:

    Passing Arguments to Functions
    Returning Arguments from Functions
    Function Prototypes
    Classic and Modern Function Declarations
3.1 Functions

In previous sections, we have seen many instances of functions being called from a main program. For instance:

    main()
    {
        f1();
    }

    int f1()
    {
        return 1;
    }

In reality, this program should produce an error or, at the very least, a warning. The reason is that the function f1() must be declared or defined before it is used. There are two ways to correct this error. One is to use function prototypes, which are explained in the next section. The other is to reorganize your program like this:

    int f1()
    {
        return 1;
    }

    main()
    {
        f1();
    }

An error will not be generated because the function f1() is defined before it is called in main().

3.2 Function Prototypes

There are two methods used to inform the compiler what type of value a function returns. If you are using a standard C function, the header file that you included at the top of your program has already informed the compiler about the function. If you are using one of your own functions, you can use a function prototype, the general form of which is:

    type function_name();

In the above example, the statement

    int f1();

would tell the compiler that the function f1() returns an integer. A function prototype not
only gives the return value of the function, but also declares the number and type of arguments that the function accepts. The general format for a function prototype is shown here:

    type function_name(type var1, type var2, type var3);

The type of each variable can be different. The prototype must match the function declaration exactly. Prototypes help the programmer to identify bugs in the program by reporting any illegal type conversions between the arguments passed to a function and the function declaration. A prototype also reports if the number of arguments sent to a function is not the same as specified in the function declaration.

The importance of prototypes may not be apparent with the small programs that we have been doing up to now. However, as the size of programs grows from a few lines to many thousands of lines, the importance of prototypes in debugging errors is evident.

An example of a function prototype is shown in this program. The function calculates the volume defined by length, width and height.

    int volume(int s1, int s2, int s3);

    void main()
    {
        int vol;

        vol = volume(5, 7, 12);
        printf("volume: %d\n", vol);
    }

    int volume(int s1, int s2, int s3)
    {
        return s1*s2*s3;
    }

Notice that the return uses an expression instead of a constant or variable.

EXERCISE:
1. To show how errors are caught by the compiler, change the above program to send four parameters to the function volume:

    vol = volume(5, 7, 12, 15);

2. Is the following program correct? Why/Why not?

    double myfunc(void)
    void main()
    {
        printf("%f\n", myfunc(10.2));
    }

    double myfunc(double num)
    {
        return num/2.0;
    }

3.3 Void

One exception is when a function does not have any parameters passed in or out. Such a function would be declared as:

    void nothing(void)

An example of this could be:

    double pi(void)           /* defining the function   */
    {                         /* with nothing passed in  */
        return 3.1415926536;  /* but with pi returned    */
    }

    main()
    {
        double pi_val;

        pi_val = pi();        /* calling the value of pi */
        printf("%f\n", pi_val);
    }

3.4 Using Function Arguments

A function argument is a value that is passed to the function when the function is called. C allows from zero to several arguments to be passed to functions. The number of arguments that a function can accept is compiler dependent, but the ANSI C standard specifies that a function must be able to accept at least 31 arguments. When a function is defined, special variables must be declared to receive parameters. These special variables are defined as formal parameters. The parameters are declared between the parentheses that follow the function's name. For example, the function below calculates and prints the sum of two integers that are sent to the function when it is called.

    void sum(int a, int b)
    {
        printf("%d\n", a+b);
    }
An example of how the function would be called in a program is:

    void sum(int a, int b);

    main()
    {
        sum(1, 6);
        sum(15, 25);
        sum(100, 10);
    }

    void sum(int a, int b)
    {
        printf("%d\n", a+b);
    }

When sum() is called, the compiler will copy the value of each argument into the variables a and b. It is important to remember that the values passed to the function (1, 6, 15, 25, 100, 10) are called arguments and the variables a and b are the formal parameters.

Functions can pass arguments in two ways. The first way is called "call by value". This method copies the value of an argument into the formal parameter of the function. Any changes made to the formal parameter do not affect the original value of the calling routine. The second method is called "call by reference". In this method, the address of the argument is copied into the formal parameter of the function. Inside the function, the formal parameter is used to access the actual variable in the calling routine. This means that changes can be made to the variable by using the formal parameter. We will discuss this further in the chapter on pointers. For now, we will only use the call by value method when we pass arguments to a function.

EXERCISE:
1. What is wrong with this program?

    print_it(int num);     /* This is a function prototype */

    main()
    {
        print_it(156.7);
    }

    print_it(int num)
    {
        printf("%d\n", num);
    }

2. Write a function that takes an integer argument and prints the value to the screen.
3.5 Using Functions to Return Values

Any function in C can return a value to the calling routine. A function can return any data type except an array. The general format for telling the compiler that a function returns a value is:

    type function_name(formal parameters)
    {
        <statements>
        return value;
    }

Where type specifies the data type of the return value of the function. If no data type is specified, then the C compiler assumes that the function is returning an integer (int). If your function does not return a value, the ANSI C standard specifies that the function should return void. This explicitly tells the compiler that the function does not return a value.

So, how do you return a value from a function? The general form is:

    return variable_name;

Where variable_name is a constant, variable or any valid C expression that has the same data type as the return value. This example shows a typical usage for a function that has a return value.

    #include <math.h>

    main()
    {
        double result;

        result = sqrt(16.0);
        printf("%f\n", result);
    }

This program calls the function sqrt(), which returns a floating point number. This number is assigned to the variable result. Notice that the header file math.h is included because it contains information about sqrt that is used by the compiler. It is important that you match the data type of the return value of the function to the data type of the variable to which it will be assigned. The same goes for the arguments that you send to a function.

Typically, the function is put on the right side of an equals (=) sign. The return value does not necessarily need to be used in an assignment statement, but could be used in a printf() statement. The following example shows both
types of functions.

    main()
    {
        int num;

        num = func();
        printf("%d\n", num);

        num = sum(5, 127);
        printf("%d\n", num);
    }

    func()
    {
        return 6;
    }

    sum(int a, int b)
    {
        int result;

        result = a + b;
        return result;
    }

One important thing to note, however, is that when a return statement is encountered, the function returns immediately to the calling routine. Any statements after the return will not be executed. The return value of a function is not required to be assigned to a variable or to be used in an expression; however, if it is not used the value is lost.

EXERCISE:
1. Write a function that accepts an integer number between 1 and 100 and returns the square of the number.
2. What is wrong with this function?

    main()
    {
        double result;

        result = f1();
        printf("%f\n", result);
    }

    int f1()
    {
        return 60;
    }
3.6 Classic and Modern Function Declarations

The original version of C used a different method of formal parameter declaration, now called the classic form. This form is given by:

    type function_name(var1, var2, ..., varn)
    type var1;
    type var2;
    ...
    type varn;
    {
        <statements>
    }

Notice that the declaration is divided into two parts. Only the names of the parameters are included inside the parentheses. Outside of the parentheses the data types and formal parameter names are specified.

The modern form, which we have been using in previous examples, is shown below:

    type function_name(type var1, type var2, ..., type varn)
    {
        <statements>
    }

In this type of function declaration, both the data types and formal parameter names are specified between the parentheses.

The ANSI C standard allows for both types of function declarations. The purpose is to maintain compatibility with older C programs, of which there are literally billions of lines of code. If you see the classic form in a piece of code, don't worry; your C compiler should be able to handle it. Going forward, you should use the modern form when writing code.

EXERCISE:
1. What is a function prototype and what are the benefits of using it?
2. Convert this program, which uses the classic form for the function declaration, to the modern form.

    area(l, w)
    int l, w;
    {
        return l*w;
    }

    void main(void)
    {
        printf("area = %d\n", area(10, 15));
    }
3.7 Passing Constant Strings

Because the PICmicro®MCU has limitations on ROM access, constant strings cannot be passed to functions in the ordinary manner. The CCS C compiler handles this situation in a non-standard manner. If a constant string is passed to a function that allows only a character parameter, then the function is called for every character in the string. For example:

    void lcd_putc(char c)
    {
        ...
    }

    lcd_putc("abcd");

is the same as:

    lcd_putc('a');
    lcd_putc('b');
    lcd_putc('c');
    lcd_putc('d');
C Operators

In C, the expression plays an important role. An expression is a combination of operators and operands. In most cases, C operators follow the rules of algebra and should look familiar, but C defines more operators than most other languages. In this chapter we will discuss many different types of operators, including:

Arithmetic
Relational
Logical
Bitwise
Increment and Decrement
Precedence of Operators

55
The following expression is a valid C statement:

result = count - 163;

4.1 Arithmetic Operators

The C language defines five arithmetic operators for addition, subtraction, multiplication, division and modulus.

+     addition
-     subtraction
*     multiplication
/     division
%     modulus

The +, -, * and / operators may be used with any data type. The modulus operator, %, can be used only with integers; this operator has no meaning when applied to a floating point number. The modulus operator gives the remainder of an integer division.

Arithmetic operators can be used with any combination of constants and/or variables.

int a,b,c;

The - operator can be used two ways, the first being a subtraction operator:

a = a - b;

The second way is used to reverse the sign of a number:

a = -a;

C also gives you some shortcuts when using arithmetic operators. One of the previous examples, a = a - b, can also be written a -= b. This method can be used with the +, -, * and / operators. The example shows various ways of implementing this method.

a = a - b;
a -= b;
becomes

0007:  MOVF   0F,W        ; load b
0008:  SUBWF  0E,F        ; subtract from a

a = b + c;

becomes

0007:  MOVF   0F,W        ; load b
0008:  ADDWF  10,W        ; add c to b
0009:  MOVWF  0E          ; save in a

a = b - c;

becomes

0007:  MOVF   0F,W        ; load b
0008:  MOVWF  0E          ; save in a
0009:  MOVF   10,W        ; load c
000A:  SUBWF  0E,F        ; subtract from a

a = b;

becomes

0007:  MOVF   0F,W        ; load b
0008:  MOVWF  0E          ; save in a

while (a==b)

becomes

0007:  MOVF   0F,W        ; load b
0008:  SUBWF  0E,W        ; subtract from a
0009:  BTFSC  03,2        ; test if zero
000A:  GOTO   00D         ; yes - so bypass

In the first instance, a is made the same as b; in the second, a is tested to check if it is the same as b. One simple fault is the use of = where == was intended. The importance of understanding assembler becomes apparent when dealing with problems - I have found, at times, that looking at the assembler listing (.LST) points to the C error.

EXERCISE:
1. Write a program that finds the remainder of 5/5, 5/4, 5/3, 5/2, and 5/1.
2. Write a program that calculates the number of seconds in a year.

4.2 Relational Operators

The relational operators in C compare two values and return a true or false result based on the comparison. The relational operators are the following:

>     greater than
>=    greater than or equal to
<     less than
<=    less than or equal to
==    equal to

57
count == 0

!=    not equal to

One thing to note about relational operators is that the result of a comparison is always a 0 or 1, even though C defines true as any non-zero value. False is always defined as zero. The following examples show some expressions with relational operators.

var > 15;      /* if var is greater than 15, the result is 1 (true);
                  if var is less than or equal to 15, the result is 0 (false) */
var != 15;     /* if var is greater or less than 15, the result is 1 (true) */

EXERCISE:
1. Rewrite the following expression using a different relational operator.

count != 0

2. When is this expression true or false? Why?

count >= 35

3. Rewrite the expressions above using any combination of relational and logical operators.

4.3 Logical Operators

The logical operators support the basic logical operations AND, OR, and NOT. Again, these operators return either a 0 for false or 1 for true. An example of linking these operators together is:

count>max || !(max==57) && var>=0

Another part of C that uses the relational and logical operators is the program control statements that we will cover in the next chapter.

EXERCISE:
1. Since C does not explicitly provide for an exclusive OR function, write an XOR function based on the following truth table.

p    q    XOR
0    0    0
0    1    1
1    0    1
1    1    0

4.4 Bitwise Operators

C contains six special operators which perform bit-by-bit operations on numbers. These bitwise operators can be used only on integer and character data types. The result of using any of these operators is a bitwise operation of the operands. Each left shift causes all bits to shift one bit position to the left, and a zero is inserted on the right side. The bit that is shifted off the end of the variable is lost. The unique thing to note about left and right shifts is that a left shift is equivalent to multiplying a number by 2 and a right shift is equivalent to dividing a number by 2. Shift operations are almost always faster than the equivalent arithmetic operation due to the way a CPU works.

58
An example of all the bitwise operators is shown below. AND 00000101 (5) 00000110 (6) ---------------00000100 (4) OR 00000101 (5) 00000110 (6) ---------------00000111 (7) & | 59 . Shift operations are almost always faster than the equivalent arithmetic operation due to the way a CPU works. Since C does not explicitly provide for an exclusive OR function. p 0 0 1 1 q 0 1 0 1 XOR 0 1 1 0 4. and a zero is inserted on the right side. The bit that is shifted off the end of the variable is lost. These bitwise operators can be used only on integer and character data types. The result of using any of these operators is a bitwise operation of the operands. The unique thing to note about using left and right shifts is that a left shift is equivalent to multiplying a number by 2 and a right shift is equivalent to dividing a number by 2..4 Bitwise Operators C contains six special operators which perform bit-by-bit operations on numbers. Each left shift causes all bits to shift one bit position to the left. write an XOR function based on the following truth table.
load b .F .F 1F 0E.and function with c .inclusive or with c . becomes 0007: MOVF 0008: MOVWF 0009: RRF 000A: RRF 000B: RRF 000C: MOVLW 000D: ANDWF j = ~a.three times .W 0E 0E.rotate contents .W 0E .load b .W 0E .^. Write a program that inverts only the MSB of a signed char.W 0E 0E.compliment j EXERCISE: 1.F . 2.save in j . becomes 0007: MOVF 0008: ANDWF 0009: MOVWF a = b >> 3. Write a program that displays the binary representation of a number with the data type of char.F 0E.save in a 0F. 60 . a = b | c.W 10.load b .save in a .right .of register for a 0F.F 0E.apply mask to contents .W 10. becomes 0007: MOVF 0008: IORWF 0009: MOVWF a = b & c.save in a 0F.load b . becomes 0007: MOVF 0008: MOVWF 0009: COMF 0F.
W .load a in w .a = 5 .W 0F. 000C: 000D: 000E: 03 0F . or or ++a.value of a loaded into w 0E. int j. the variable is incremented then that value is used in an expression.F 0F.load value of a into w . as the following code will be generated: MOVF INCF MOVWF value 0E.F .F 0E .W 0E . void main(void) { int i. a = 3.j.previous value reloaded overwriting incremented The following example illustrates the two uses. 0009: 000A: 000B: j = a++. --a.register assigned to a INCF MOVF MOVWF 0F. a--.5 Increment and Decrement Operators How would you increment or decrement a variable by one? Probably one of two statements pops into your mind.j = 4 NOTE: Do not use the format a = a++. 0007: MOVLW 0008: MOVWF j = ++a. the value of the variable is used in the expression then incremented. When the ++ or – follows the variable. Again. Maybe following: a = a+1. for increment for decrement When the ++ or – sign precedes the variable.store w in j MOVF INCF MOVWF 0F. the makers of C have come up with a shorthand notation for increment or decrement of a number. 61 . or a = a-1. The general formats are: a++.4.value in a incremented 0E.a = 4 .
i = 10. j = %d\n”. a. printf(“a=%d.6 Precedence of Operators 62 . i = 10..j). a = a+1. } The first printf() statement will print an 11 for i and a 10 for j. j = %d\n”. j = ++i. b = b-1. What are the values of a and b after this segment of code finishes executing? a = 0. a++.j). 4. Rewrite the assignment operations in this program to increment or decrement statements. printf(“i = %d. b = a. b = -a + ++b. b++. a = 1.b).i.i. void main(void) { int a. b = 0. j = i++. b=%d\n”. printf(“i = %d. } 2. a = ++a + b++.
Precedence refers to the order in which operators are processed by the C compiler. For instance, if the expression a+b*c was encountered in your program, which operation would happen first, addition or multiplication? The C language maintains precedence for all operators. The following shows the precedence from highest to lowest.

Priority    Operator
1           ()
2           ++  --  sizeof  &  *  +  -  ~  !

The second line is composed entirely of unary operators, such as increment and decrement. Some of these operators have not been covered yet, but don't worry, they will be covered later. For example, with a = 6 and b = 0:

8+3*b++;
b += -a*2+3*4;

63

Program Control Statements

In this chapter you will learn about the statements C provides to control the flow of your program. You will also learn how relational and logical operators are used with these control statements. We will also cover how to execute loops.
5.1 if Statement

The if statement is a conditional statement. The block of code associated with the if statement is executed based upon the outcome of a condition. The simplest format is:

if (expression) statement;

The expression can be any valid C expression. The if statement evaluates the expression for a true or false result. If the expression is true, the statement is executed; if the expression is false, the program continues without executing the statement. Again, any non-zero value is true and any zero value is false. A simple example of an if is:

if(num>0)
    printf("The number is positive\n");

This example shows how relational operators are used with program control statements. The if statement can also be used to control the execution of blocks of code. The general format is:

if (expression)      /* NOTE: no ";" after the expression */
{
    ...
}

This tells the compiler that if the expression is true, execute the code between the braces. An example of the if and a block of code is:

if (count < 0)
{
    count = 0;
    printf("Count down\n");
}

or

if(TestMode == 1)
{
    ... print parameters to use
}

The braces { and } are used to enclose the block of code. Other operator comparisons used in the if statement are:

x == y      x equals y
x != y      x is not equal to y
x > y       x is greater than y
i). Unlike an if statement the ? operator returns a value. then it is assigned 10. The conditional_test is evaluated prior to each execution of the loop. If you have a statement or set of statements that needs to be repeated. i<10. increment ) The initialization section is used to give an initial value to the loop counter variable. This section of the for loop is executed only once. If the conditional_test is true the loop is executed. At this point. is set to zero. i++) printf(“%d “. the loop counter variable is incremented. statement is executed. i ? j=0 : j=1. conditional_test . Normally this section tests the loop counter variable for a true or false condition. the expression j=0 will be evaluated.j. for(i=0.int i. The increment section of the for loop normally increments the loop counter variable. such as BASIC or Pascal. a for loop easily implements this. If the conditional_test is false the loop exits and the program proceeds. If this statement is true the printf(“%d “.i). printf(“done”). i. The most common form of a for loop is: for( initialization . The basic format of a for loop is similar to that of other languages.4 for Loop One of the three loop statements that C provides is the for loop. The program works like this: First the loop counter variable. or j=i?0:1. i = j. i will be incremented unless it is 20 or higher. Each time after the printf(“%d “. Here is an example of a for loop: void main(void) { int i. This whole process continues until the expression i<10 becomes false. statement is executed. next the expression i<10 is evaluated. the for loop is exited and the printf(“done”).i). 69 . Note that this counter variable must be declared before the for loop can use it. is executed. 5. For example: i = (i<20) ? i+1 : 10. } This program will print the numbers 0 – 9 on the screen. Since i is 1 or non-zero.
} The expression is any valid C expression. While an expression is true. num++) 2. count<50. exit loop .h!=10. The value of expression is checked 70 .h++) 0007: CLRF 0E 0008: MOVLW 0A 0009: SUBWF 0E. the conditional test is performed at the start of each iteration of the loop. the while loop repeats a statement or block of code. num. Therefore. .W 000A: BTFSC 03. Here are some variations on the for loop: for (num=100. Hence. Here is the general format: while (expression) statement. or while (expression) { statement. for (h=0.and test for zero . 5.As previous stated. count++) Convert an example in to assembler to see what happens: int h.subtract from h . count+=5) for (count=1.2 000B: GOTO 00F a++. ) for(num=1. 000C: INCF 0F. You are not restricted just to incrementing the counter variable. num=num-1) for (count=0.F 000E: GOTO 008 .5 while Loop Another loop in C is the while loop.a.clear h .load 10 .increment h . count<10 && error==false.F 000D: INCF 0E.if i=10.increment a . num>0. What do the following for() statements do? for(i=1. the for loop will never be executed. Write a program that displays all the factors of a number. the name while. if the test is false to start off with.i++) for( . .loop again EXERCISE: 1.
ch=getch(). the program will continue to get another character from the keyboard. and prints them to the screen. What do the following while statements do? a. Once a q is received. When a carriage return is encountered. exit the program. printf(“Give me a q\n”). while(i<10) { printf(“%d “.H> #use RS232 (Baud=9600. Here is an example of a while loop: #include <16C74. while(ch!=’q’) ch=getch(). i++.6 do-while Loop The final loop in C is the do loop.prior to each iteration of the statement or block of code. An example of a 71 . } You will notice that the first statement gets a character from the keyboard. printf(“Got a q!\n”). } b. Then the expression is evaluated. As long as the value of ch is not a q. while(1) printf(“%d “. 5. 2. Here we combine the do and while as such: do { statements } while(expression) In this case the statements are always executed before expression is evaluated. EXERCISE: 1. This means that if expression is false the statement or block of code does not get executed. The expression may be any valid C expression. RCV=pin_c7) void main(void) { char ch. the printf is executed and the program ends.i++).i). Write a program that gets characters from the keyboard using the statement ch=getch(). xmit-pin_c6.
} This program is equivalent to the example we gave in Section 5. printf(“Got a q!\n”).j++) printf(“%d “. i++.5 using a do-while loop. if the letter ‘D’ is entered (ASCII value of 68). The ANSI C standard specifies that compilers must have at least 15 levels of nesting.7 Nesting Program Control Statements When the body of a loop contains another loop.8 break Statement 72 . use the ASCII value to print an equal number of periods to the screen.5 using a do-while loop: 2. 5.i*10+j). while(i < 10) { for(j=0. EXERCISE: 1. the program ends. Rewrite Exercise 2 in Section 5. When a ‘Q’ is entered. your program would print 68 periods to the screen. Write a program that gets a character from the keyboard (ch=getch(). Any of C’s loops or other control statements can be nested inside each other. RCV=pin_c7) void main(void) { char ch. xmit-pin_c6. An example of a nested for loop is shown here: i = 0.j<10. Each time a character is read.5 EXERCISE: 1. For example. } while(ch != ‘q’). the second loop is said to be nested inside the first loop. do { ch = getch().). } This routine will print the numbers 00 – 99 on the screen. 5. Rewrite both a and b of Exercise 1 in Section 5.do-while loop is shown: #include <16C74.H> #use RS232 (Baud=9600.
The break statement allows you to exit any loop from any point within the body, bypassing the loop's normal termination expression. When a break statement is encountered in a loop, the program jumps to the next statement after the loop. The break statement works with all C loops. For example:

void main(void) {
    int i;

    for(i=0;i<100;i++) {
        printf("%d ",i);
        if(i==15)
            break;
    }
}

This program will print the numbers 0 - 15 on the screen.

EXERCISE:
1. Write three programs, each using one of C's loops, that count forever but exit when a key is hit. You can use the function kbhit() to detect when a key is pressed. kbhit() returns 1 when a key is pressed and a 0 otherwise. kbhit() requires the header file conio.h.
2. What does this loop do?

#include <16C74.H>
void main(void) {
    int i;

    for(i=0;i<50;i++) {
        printf("Microchip® is great!");
        if(getch()=='q')
            break;
    }
}

73

5.9 continue Statement

Let's assume that when a certain condition occurs in your loop, you want to skip to the end of the loop without exiting the loop. For this, C has provided you with the continue statement. When the program encounters a continue, it will skip all statements between the continue and the test condition of the loop. For example:

void main(void) {
    int i;

    for(i=0;i<100;i++) {
continue. break. a continue will cause the increment part of the loop to be executed and then the conditional test is evaluated. case constant2: statement(s). A switch statement is equivalent to multiple if-else statements. The general form for a switch statement is: switch (variable) { case constant1: statement(s). The default is optional. the program skips the printf() and evaluates the expression i <100 after increasing i. A continue will cause the program to go directly to the test condition for while and do-while loops. Each time the continue is reached. for(. case constantN: statement(s).i). the body of statements associated with that constant is executed until a break is encountered. default: statement(s). 5. } } This loop will never execute the printf() statement.10 switch Statement The if statement is good for selecting between a couple of alternatives. printf(“%d “. } The variable is successively tested against a list of integer or character constants. if(ch==’x’) 74 .. the statements associated with default are executed. When a match is found. break. but becomes very cumbersome when many alternatives exist. An example of a switch is: main() { char ch. If no match is found. break. Again C comes through by providing you with a switch statement.) { ch = getch().
switch(ch) { case ‘0’: printf(“Sunday\n”). break. break. break. break. break. case ‘2’: printf(“Tuesday\n”). byte cp1_sw_get() //characters per line { byte cp1. break. case ‘4’: printf(“Thursday\n”). then separated from the other bits and used to return the appropriate value to the calling routine. Another example used to set the number of characters per line on a LCD display is as follows. case 0x20: cp1 = 20. break. } } } This example will read a number between 1 and 7. 75 . break. default: cp1 = 40. case 0x10: cp1 = 16. case ‘3’: printf(“Wednesday\n”).return 0. cp1=portd & 0b01110000. the message Invalid entry will be printed. case ‘1’: printf(“Monday\n”). break. break. If the number is outside of this range. case ‘6’: printf(“Saturday\n”). The DIP switch and the characters per line settings read. //mask unwanted bits switch(cp1) //now act on value decoded { case 0x00: cp1 = 8. default: printf(“Invalid entry\n”). case ‘5’: printf(“Friday\n”). Values within the range will be converted into the day of the week. break. break. case 0x30: cp1 = 28.
printf(“D = Division\n”). . case ‘A’: printf(“\t\t%d”. Also switches can be nested. as long as the inner and outer switches do not have any conflicts with values. ch=getch(). Here is an example of nested switches. } //send back value to calling routine The ANSI Standard states that a C compiler must support at least 257 case statements. printf(“Enter Choice:\n”). printf(“S = Subtraction\n”). printf(“A = Addition\n”). No two case statements in the same switches can have the same values.a+b). An ANSI compiler must provide at least 15 levels of nesting for switch statements. void main(void) { int a=6. An example is provided to illustrate this. switch (ch) { case ‘S’: b=-b. break. char ch. } break. 76 . break. case 1: printf(“b is true”). This means that two case statements can share the same portion of code. The break statement within the switch statement is also optional. printf(“M = Multiplication\n”).} return(cp1). switch (a) { case 1: switch (b) { case 0: printf(“b is false”). case 2: .b=3.
The phrases to describe coins are: penny.i<10.11 null Statement (. break. In this example. The null statement satisfies the syntax in those cases. What is wrong with this segment of code? float f. switch(f) { case 10. for.break. } } EXERCISE: 1. which introduces a one cycle delay.a/b). the loop expression of the for line[i++]=0 initializes the first 10 elements of line to 0. since no additional commands are required. for (i=0. break. dime. It may appear wherever a statement is expected. quarter. case ‘D’: printf(“\t\t%d”.i++) . default: printf(“\t\tSay what?”).12 return Statement 77 .). 3. nickel. Statements such as do.) The null statement is a statement containing only a semicolon (. and dollar. The statement body is a null. 5. 2.05: . if and while require that an executable statement appears as the statement body. Use a switch statement to print out the value of a coin. The value of the coin is held in the variable coin.a*b). What are the advantages of using a switch statement over many if-else statements? 5. Nothing happens when the null statement is executed – unlike the NOP in assembler. case ‘M’: printf(“\t\t%d”. .
If a returned value is not required, declare the function to have a void return type. The return statement terminates the execution of a function and returns control to the calling routine. A value can be returned to the calling function if required, but if one is omitted, the returned value is then undefined. If no return is included in the called function, control is still passed back to the calling function after execution of the last line of code.

GetValue(c)
int c;
{
    c++;
    return c;
}

void GetNothing(c)
int c;
{
    c++;
    return;
}

main() {
    int x;

    x = GetValue();
    GetNothing();
}

78
Arrays and Strings

In this chapter we will discuss arrays and strings. An array is simply a list of related variables of the same data type. A string is defined as a null terminated character array, and is also known as the most common one-dimensional array. Topics that will be discussed:

Arrays
Strings
One-dimensional Arrays
Multidimensional Arrays
Initialization

79
6.1 One-Dimensional Arrays

An array is a list of variables that are all of the same type and can be referenced through the same name. An individual variable in the array is called an array element. This is a simple way to handle groups of related data. The general form for declaring one-dimensional arrays is:

type var_name [size];

Where type is a valid C data type, var_name is the name of the array, and size specifies how many elements are in the array. For instance, if we want an array of 50 elements we would use this statement:

int height[50];

C defines the first element to be at an index of 0. If the array has 50 elements, the last element is at an index of 49. Say I want to index the 25th element of the array height and assign a value of 60. The following example shows how to do this.

height[24] = 60;

C stores one-dimensional arrays in contiguous memory locations, with the first element at the lowest address. If the following segment of code is executed,

int num[10];
int i;

for(i=0;i<10;i++)
    num[i] = i;

0007:  CLRF   18          ; clear i
0008:  MOVLW  0A          ; load 10
0009:  SUBWF  18,W        ; now test if < 10
000A:  BTFSC  03,0
000B:  GOTO   013         ; if so then stop routine
000C:  MOVLW  0E          ; load start of num area
000D:  ADDWF  18,W
000E:  MOVWF  04
000F:  MOVF   18,W
0010:  MOVWF  00
0011:  INCF   18,F
0012:  GOTO   008

array num will look like this in memory:

element:  1  2  3  4  5  6  7  8  9  10
value:    0  1  2  3  4  5  6  7  8  9

80

Any array element can be used anywhere you would use a variable or constant. However, C does not allow you to assign the value of one array to another simply by using an assignment like:

int a[10],b[10];
a=b;

The above example is incorrect. If you want to copy the contents of one array into another, you must copy each individual element from the first array into the second array. The following example shows how to copy the array a[] into b[], assuming that each array has 20 elements.

int i;

for(i=0;i<20;
i++)
    b[i] = a[i];

What happens if you have an array with ten elements and you accidentally write to the eleventh element? C has no bounds checking for array indexes, so you may read or write to an element not declared in the array. This will generally have disastrous results: often it will cause the program to crash, and sometimes even the computer to crash as well.

Here is another example program. It simply assigns the square of the index to the array element.

#include <16c74.h>

void main(void) {
    int num[10];
    int i;

    for(i=0;i<10;i++)
        num[i] = i * i;
    for(i=0;i<10;i++)
        printf("%d ",num[i]);
}

EXERCISE:
1. Write a program that first reads 10 characters from the keyboard using getch(), then prints out all elements. The program will report if any of these characters match.

char count[10];
int i;

for(i=0;i<10;i++)
    count[i]=getch();

2. What is wrong with the following segment of code?

int i;

for(i=0;i<100;i++)
    count[i]=getch();

81
6.2 Strings

The most common one-dimensional array is the string. C does not have a built-in string data type. Instead, it supports strings using one-dimensional arrays of characters. A string is defined as a null terminated character array. If every string must be terminated by a null, then when a string is declared you must add an extra element to hold the null. All string constants are automatically null terminated by the C compiler.

How can you input a string into your program using the keyboard? The function gets(str) will read characters from the keyboard until a carriage return is encountered. The string of characters that was read will be stored in the declared array str. You must make sure that the length of str is greater than or equal to the number of characters read from the keyboard plus the null (null = \0). Let's illustrate how to use the function gets() with an example.

void main(void) {
    char str[80];
    int i;

    printf("Enter a string (<80 chars):\n");
    gets(str);
    for(i=0;str[i];i++)
        printf("%c",str[i]);
    printf("\n%s",str);
}

Here we see that the string can be printed in two ways: as an array of characters using the %c or as a string using the %s.

EXERCISE:
1. Write a program that reads a string of characters from the screen and prints them in reverse order on the screen.
2. What is wrong with this program? The function strcpy() copies the second argument into the first argument.

#include <string.h>
void main(void) {
    char str[10];

    strcpy(str, "Motoroloa who?");
    printf(str);
}

82
6.3 Multidimensional Arrays

C is not limited to one-dimensional arrays. You can create two or more dimensions; additional dimensions are added simply by attaching another set of brackets. However, when using multidimensional arrays, the number of variables needed to access each individual element increases. For simplicity, we will discuss only two-dimensional arrays. A two-dimensional array is best represented by a row/column format. For example, to create an integer array called number with 5x5 elements, you would use:

int number[5][5];      /* uses 25 RAM locations */

Due to the PICmicro®MCU's memory map, arrays of 100 or 10x10 elements are not possible; however, two arrays of 50 would fit.

Two-dimensional arrays are used just like one-dimensional arrays. For example, the following program loads a 5x4 array with the product of the indices, then displays the contents of the array in row/column format.

void main(void) {
    int array[5][4];
    int i,j;

    for(i=0;i<5;i++)
        for(j=0;j<4;j++)
            array[i][j]=i*j;

    for(i=0;i<5;i++) {
        for(j=0;j<4;j++)
            printf("%d ",array[i][j]);
        printf("\n");
    }
}

The output of this program should look like this:

0 0 0 0
0 1 2 3
0 2 4 6
0 3 6 9
0 4 8 12

As you can see, two-dimensional arrays are accessed a row at a time, from left to right.

EXERCISE:
1. Write a program that declares a 3x3x3 array and loads it with the numbers 1 to 27. Print the array to the screen.

83
6.4 Initializing Arrays

So far you have seen only individual array elements having values assigned. C provides a method in which you can assign an initial value to an array, just like you would for a variable. The general form for one-dimensional arrays is shown here:

type array_name[size] = {value_list};

The value_list is a comma separated list of constants that are compatible with the type of array. The first constant will be placed in the first element, the second constant in the second element, and so on. The following example shows a 5-element integer array initialization.

int i[5] = {1,2,3,4,5};

The element i[0] will have a value of 1 and the element i[4] will have a value of 5.

A string (character array) can be initialized in two ways. First, you may make a list of each individual character:

char str[3] = {'a', 'b', 'c'};

The second method is to use a quoted string, as shown here:

char name[5] = "John";

You may have noticed that no curly braces enclosed the string. They are not used in this type of initialization because strings in C must end with a null. The compiler automatically appends a null at the end of "John".

Multidimensional arrays are initialized in the same way as one-dimensional arrays. It is probably easier to simulate a row/column format when using two-dimensional arrays. The following example shows a 3x3 array initialization.

int num[3][3] = { 1, 2, 3,
                  4, 5, 6,
                  7, 8, 9 };

EXERCISE:
1. Is this declaration correct?

int count[3] = 10, 15;

2. Using the program from Exercise 1, print out the sum of each row and column.

84
use the following statement:

printf("%s", names[4]);

6.5 Arrays of Strings

Arrays of strings are very common in C. They can be declared and initialized like any other array, but the way in which you use the array is somewhat different from other arrays. For example, what does the following declaration define?

char name[10][40];

This statement specifies that the array name contains 10 names, each up to 40 characters long (including the null). To access a string from this table, specify only the first index. The same follows for arrays with greater than two dimensions. For instance, if the array animals was declared as such:

char animals[5][4][80];

then to access the second string in the third list, specify animals[2][1]; to obtain an index into the table, you use the first two dimensions.

EXERCISE:
1. Write a program that creates a string table containing the words for the numbers 0 through 9. Allow the user to enter a single digit number, and then your program will display the respective word. To obtain an index into the table, subtract '0' from the character entered.
2. Write a program that has a lookup table for the square and the cube of a number. Create a 9x3 array to hold the information for numbers 1-9. Each row should have the number, the square of the number and the cube of the number. Ask the user for a number using the statement scanf("%d", &num); then print out the number and its square and cube.

85

6.6 String Functions

Strings can be manipulated in a number of ways within a program. One example is copying from a source to a destination via the strcpy command; this allows a constant string to be inputted into RAM.

#include <string.h>      //the library for string functions
char string[10];         //define string array

strcpy(string, "Hi There");   //setup characters into string

More examples:

char s1[10], s2[10];

strcpy(s1,"abc");
strcpy(s2,
"def");
strcat(s1,s2);

printf("%u",strlen(s1));      //will print 6
printf(s1);                   //will print abcdef

if(strcmp(s1,s2)!=0)
    printf("no match");

Note that pointers to ROM are not valid in the PICmicro®MCU, so you can not pass a constant string to one of these functions; for example, strlen("hi") is not valid.

86
Pointers

This chapter covers one of the most important and most troublesome features of C: the pointer. A pointer is basically the address of an object. Some of the topics we will cover in this chapter are:

Pointer Basics
Pointers and Arrays
Passing Pointers to Functions

87
88 . For example. //more than 1 byte may be assigned to a b=6.b. This line can print the value at the address pointed to by a. the a points to b. This process of referencing a value through a pointer is called indirection. #include <16c74.h> void main(void) { int *a. The type of a pointer is one of the valid C data types. The next statement assigns the value of 6 to b. a=&b. This means only location 0255 can be pointed to. printf(“%d”. It specifies the type of variables to which var_name can point. which is an integer. A graphical example is shown here. } NOTE: By default. Then the address of b (&b) is assigned to the pointer variable a. int *ptr. use the following directive: #device *=16 Be aware more ROM code will be generated since 16-bit arithmetic is less efficient. For example.*a). The address of a variable can be accessed by preceding the variable with the & operator. Finally. The two special operators that are associated with pointers are the * and the &. the following statement creates a pointer to an integer. This tells the compiler that var_name is a pointer variable. If b is a variable at location 100 in memory. The * operator returns the value stored at the address pointed to by the variable.1 Introduction to Pointers A pointer is a memory location (variable) that holds the address of another memory location. which is a pointer to an integer and b. the value of b is displayed to the screen by using the * operator with the pointer variable a. This line can be read as assign a the address of b. You may have noticed that var_name is preceded by an asterisk *. then a would contain the value 100. the compiler uses one byte for pointers. The first statement declares two variables: a. The general form to declare a pointer variable is: type *var_name. if a pointer variable a contains the address of variable b.7. For example. To select 16 bit pointers. For parts with larger memory a two-byte (16-bit) pointer may need to be used.
For example, let’s restructure the previous program in the following manner.

#include <16c74.h>
void main(void)
{
   int *a,b;
   a = &b;
   *a=6;
   printf("%d",b);
}

In this program, we first assign the address of variable b to a, then we assign a value to b by using a. Obviously, the use of a pointer in the previous two examples is not necessary, but it illustrates the usage of pointers.

EXERCISE:
1. Write a program with a for loop that counts from 0 to 9 and displays the numbers on the screen. Print the numbers using a pointer.

7.2 Restrictions to Pointers

In general, pointers may be treated like other variables. However, there are a few rules and exceptions that you must understand. In addition to the * and & operators, there are only four other operators that can be applied to pointer variables: +, -, ++, --. Only integer quantities may be added or subtracted from pointer variables. When a pointer variable is incremented, it points to the next memory location. For instance, if we assume that the pointer variable p contains the address 100, after the statement

p++;

p will have a value of 102 assuming that integers are two bytes long.
For example. This is valid without the const puts the data into ROM. 90 . This is due to the precedence of * versus ++. the following statement: int *p. You must be careful when incrementing or decrementing the object pointed to by a pointer. to or from a pointer.i. Cause p to point to the 200th memory location past the one to which p was previously pointing. Pointers may also be used in relational operations. the following is illegal: char const name[5] = “JOHN”. Print the value of each pointer variable using the %p. It is possible to increment or decrement either the pointer itself or the object to which it points. then increments p. int *ip. To increment the object that is pointed to by a pointer. What do you think the following statement will do if the value of ptr is 1 before the statement is executed? *p++. EXERCISE: 1.You can add or subtract any integer value you wish. p = p+200. they both point to the same object. ptr=&name[0]. The parenthesis cause the value that is pointed to by p to be incremented. This statement gets the value pointed to by p. Declare the following variables and assign the address of the variable to the pointer variable. they only make sense if the pointers relate to each other. Pointers cannot be created to ROM. For example. use the following statement: (*p)++. What are the sizes of each of the data types on your machine? char *cp.e. . . However. i.ch. Then increment each pointer and print out the value of the pointer variable again.
int a[5]={1. If you use an array name without an index. void main(void) { int *p.i++) printf(“%d”. The following program is valid. Since an array name without an index is a pointer. only a pointer to the first element is passed.f. 7. It is this relationship between the two that makes the power of C even more apparent. You will notice that in the printf() statement we use *(p+i).2. you are actually using a pointer to the beginning of the array. This would allow you to access the array using pointer arithmetic.i++) printf(“%d”.i<5. int a[5]={1. What is actually passed to the function.i<5. void main(void) { int *p. What is wrong with this fragment? int *p. 91 . we used the function gets(). p=a. you can assign that value to another pointer.2.float *fp. in which we passed only the name of the string. double *dp. In the last chapter. 2. p = p/2. p=a. You may be surprised that you can also index a pointer as if it were an array.5}.i. p = &i.3 Pointers and Arrays In C.p[i]). where i is the index of the array.4. is a pointer to the first element in the string.*(p+i)). for(i=0. pointers and arrays are closely related and are sometimes interchangeable. For instance.5}. } This is a perfectly valid C program.i. Important note: when an array is passed to a function.d.3. for(i=0. they cannot be created for use with constant arrays or structures.i.3.4.
p++.load into array pointer .load 4 .load 3 . Therefore. .pointer . int array[8].load 4 . this statement would be invalid for the previous program.load array position .. . 92 . .load start of array . The following examples show the problem – the second version does not mix pointers and arrays: int *p. it is invalid to increment the pointer.W 04 03 00 04 10 .W 04 03 00 10 04 04 00 .point to it .add to array start position . p=array. int array[8]. Since pointers to arrays point only to the first element or base of the string. p=array. 000D: MOVLW 000E: MOVWF int *p. 0007: MOVLW 0008: MOVWF *p=3. 0009: MOVF 000A: MOVWF 000B: MOVLW 000C: MOVWF array[1]=4.point at indirect register . 0007: MOVLW 0008: MOVWF p[1]=3. count = count+2.load in 3 .} One thing to remember is that a pointer should only be indexed when it points to an array.save in location pointed to *(array+1) = 4.into first location of array 0F 0E 01 0E. Is this segment of code correct? int count[10].. Mixing pointers and arrays will produce unpredictable results.load start of array .save in pointed to location EXERCISE: 1.pointer .load array position .and save at pointed location .
2. What value does this segment of code display?

int value[5]={5,10,15,20,25};
int *p;
p = value;
printf("%d",*p+3);

7.4 Passing Pointers to Functions

In Section 3.2, we talked about the two ways that arguments can be passed to functions, “call by value” and “call by reference”. The second method passes the address to the function, or in other words a pointer is passed to the function. At this point any changes made to the variable using the pointer actually change the value of the variable from the calling routine. Pointers may be passed to functions just like any other variables. The following example shows how to pass a string to a function using pointers.

#include <16c74.h>
void puts(char *p);

void main(void)
{
   puts("Microchip is great!");
}

void puts(char *p)
{
   while(*p)
   {
      printf("%c",*p);
      p++;
   }
   printf("\n");
}

In this example, the pointer p points to the first character in the string, the “M”. The statement while(*p) is checking for the null at the end of the string. Each time through the while loop, the character that is pointed to by p is printed. Then p is incremented to point to the next character in the string.

Another example of passing a pointer to a function is:

void IncBy10(int *n)
{
   *n += 10;
}

void main(void)
{
   int i=0;
   IncBy10(&i);
}

Both of the above examples show how to return a value from a function via the parameter list. The above example may be rewritten to enhance readability using a special kind of pointer parameter called a reference parameter. Example:

void IncBy10(int & n)
{
   n += 10;
}

void main(void)
{
   int i=0;
   IncBy10(i);
}

EXERCISE:
1. Write a program that passes a float value to a function. Inside the function, the value of –1 is assigned to the function parameter. After the function returns to main(), print the value of the float variable.
2. Write a program that passes a float pointer to a function. Inside the function, the value of –1 is assigned to the variable pointed to. After the function returns to main(), print the value of the float variable.
Structures and Unions

Structures and Unions represent two of C’s most important user defined types. Structures are a group of related variables that can have different data types. Unions are a group of variables that share the same memory space.

In this chapter we will cover:

   Structure Basics
   Pointers to Structures
   Nested Structures
   Union Basics
   Pointers to Unions
Each of the items within a structure has its own data types. only the name of this type of structure. For example. We will refer to them as members. struct catalog { char author[40]. you might use a structure to hold the name. To access any member of a structure.8. These names are separated by a period. } variable-list. In this example. The following example is for a card catalog in a library. . In general. char pub[40]. } card. you would type: 96 . the name of the structure is catalog. C defines structures in the following way: struct tag-name { type element1. Within the structure each type is one of the valid data types.rev=’a’ where card is the variable name and rev is the member. you must specify both the name of the variable and the name of the member. type element2. you would use card. For example. The operator is used to access members of a structure. Each of the items in the structure is commonly referred to as fields or members. which can be different from each other. address and telephone number of all your customers. type elementn. These types do not need to be the same. unsigned char rev. The variable card is declared as a structure of type catalog. The variable-list declares some variables that have a data type of struct tag-name. To print the author member of the structure catalog. The keyword struct tells the compiler that a structure is about to be defined. it is not the name of a variable. The tag-name is the name of the structure. to access the revision member of the structure catalog. The variable-list is optional. char title[40]. unsigned int data.1 Introduction to Structures A structure is a group of related items that can be accessed through a common name. the information stored in a structure is logically related.
If you want to print the name of the publisher. the second is 1 and. How would you access the title member of the 10th element of the structure array big? big[9].e. you would index the structure variable (i. Once you have defined a structure.list. what does a structure catalog looks like in memory. declare and access a structure. The first element of title is 0.author). What if you wanted to access a specific element in the title.card.title Structure may also be passed to functions.date. big[10]). finally the third is 2. you can define two more variables like this: struct catalog book.title[2]. You can also assign the values of one structure to another simply by using an assignment. if the structure catalog was defined earlier in the program. Now that we know how to define. like the 3rd element in the string? Use card. A function can be return a structure just like any other data type.card. you can create more structure variables anywhere in the program using: struct tag-name var-list. struct catalog big[50]. If you wanted to access an individual structure within the array.printf(“Author is %s\n”. This example declares a 50-element array of the structure catalog. author title pub date rev 40 bytes 40 bytes 40 bytes 2 bytes 1 byte If you wanted to get the address of the date member of the card structure you would use &card. The following fragment is perfectly valid struct temp { 97 . C allows you to declare arrays of structures just like any other data type. you would use printf(“%s”.pub). For instance.
#byte cont = 8.b=53.a=37.data=n.int a. int i. var2 = var1. An example of using this on the PICmicro® to set up an LCD interface would be struct cont_pins { boolean en1.30}. void LcdSendNibble(byte n) { cont.27. //delay cont. the entire structure is passed by the “call by value” method.’N’. and D3-6 will be data.var2.en1=0. } cont. var2. //set en1 line low 98 . NOTE: The :4 notation for data indicates 4 bits are to be allocated for that item. One important thing to note: When you pass a structure to a function.”Jack”. variable var2 will have the same contents as var1. This is an example of initializing a structure. //present data delay_cycles(1). } var1.65. boolean en2. int data:4. } var1[2]={“Rodger”. ‘Y’. char ch. struct example { char who[50]. float b. Therefore. //set en1 line high delay_us(2). After this fragment of code executes the structure.en1=1. In this case D0 will be en1. char c. //enable for all displays //enable for 40x4 line displays //register select //control on port d This sets the structure for cont_pins and is then handled within the program. var1. The number of elements in a structure does not affect the way it is passed to a function. any modification of the structure in the function will not affect the value of the structure in the calling routine. //delay cont. boolean rs.
What is wrong with this section of code? struct type { int i. Since C passes the entire structure to a function. Write a program that has a structure with one character and a string of 40 characters. it is easier to pass a pointer to the structure to the function. char ch. Pointers to structures are declared in the same way that pointers to other data types are declared.q.} EXERCISE: 1. Read a string and save it in the string using gets (). } s. i = 10. 99 . long l. For this reason. you must use the arrow operator as shown here: q->i=1. Then print the values of the members. 2. struct temp { int i. } p. This statement would assign a value of 1 to the number i of the variable p. large structures can reduce the program execution speed because of the relatively large data transfer. Using this definition of the temp structure. Read a character from the keyboard and save it in the character using getch(). the statement q=&p is perfectly valid. For example. . Notice that the arrow operator is a minus sign followed by a greater-than sign without any spaces in between. . the following section of code declares a structure variable p and a structure pointer variable q with structure type of temp. 8. char str[50]. Now that q points to p.2 Pointers to Structures Sometimes it is very useful to access a structure through a pointer.
One important thing to note: When accessing a structure member using a structure variable, use the period. When accessing a structure member using a pointer to the structure, you must use the arrow operator. This example shows how a pointer to a structure is utilized.

#include <16c74.h>
#include <string.h>
struct s_type {
   int i;
   char str[80];
} s, *p;

void main(void)
{
   p=&s;
   s.i=10;
   strcpy(p->str,"I like structures");
   printf("%d %d %s",s.i,p->i,p->str);
}

The two lines s.i=10 and p->i=10 are equivalent.

EXERCISE:
1. Is this segment of code correct?

struct s_type {
   int a;
   int b;
} s, *p;

void main(void)
{
   p=&s;
   p.a=100;
}

2. Write a program that creates an array of structures three long of the type PICmicro®MCU. You will need to load the structures with a PIC16C5X, PIC16CXX, and a PIC17CXX device. The user will select which structure to print using the keyboard to input a 1, 2, or 3. The format of the structure is:

struct PIC {
   char name[20];
   char feature[80];
   unsigned char progmem;
   unsigned char datamem;
};
8.3 Nesting Structures

So far, you have only seen that members of a structure were one of the C data types. However, the members of structures can also be other structures. This is called nesting structures. For example:

#define NUM_OF_PICS 25
struct PIC {
   char name[40];
   char feature[80];
   unsigned char progmem;
   unsigned char datamem;
};

struct products {
   struct PIC devices[NUM_OF_PICS];
   char package_type[40];
   float cost;
} list1;

The structure products has three elements: an array of PIC structures called devices, a string that has the package name, and the cost. These elements can be accessed using the list1 variable.
8.4 Introduction to Unions

A union is defined as a single memory location that is shared by two or more variables. The variables that share the memory location may be of different data types. However, you may only use one variable at a time. A union looks very much like a structure. The general format of the union is:

union tag-name {
   type element1;
   type element2;
   .
   .
   type elementn;
} variable-list;

Again, the tag-name is the name of the union and the variable-list are the variables that have a union type tag-name. The difference between unions and structures is that each member of the union shares the same data space. For example, the following union contains three members: an integer, a character array, and a double.

union u_type {
   int i;
   char c[3];
   double d;
} temp;

It is important to note that the size of the union is fixed at compile time to accommodate the largest member of the union. The integer uses two bytes, the character array uses three bytes and the double uses four bytes. Assuming that doubles are four bytes long, the union temp will have a length of four bytes. We will use the previous example to illustrate a union. The way that a union appears in memory is shown below.

<--------------------------------------------------double--------------------------------------------------->
<---------c[2]----------> <---------c[1]----------> <---------c[0]---------->
<---------------------integer---------------------->
element3                  element2                  element1                  element0

Accessing the members of the union is the same as with structures. To access a member, you use a period. The statement temp.i will access the two byte integer member i of the union temp and temp.d will access the four byte double d. If you are accessing the union through a pointer, you would use the arrow operator just like structures.
} When you want to read the A/D. union sample { unsigned char bytes[2]. you would read two bytes of data from the A/D and store them in the bytes array. Your program should print the long int to the screen a byte at a time. signed short word. The microcontroller reads the A/D in two bytes. Write a program that has a union with a long int member and an four byte character array. Then. 103 . whenever you want to use the 12-bit sample you would use word to access the 12-bit number.A good example of using a union is when an 8-bit microcontroller has an external 12-bit A/D converter connected to a serial port. What are the differences between a structure and an union? What are the similarities? 2. EXERCISE: 1. So we might set up a union that has two unsigned chars and a signed short as the members..PIC Specific C Having understood the basics of C. Every compiler has its own good and not so good points. it is now time to move into the PICmicro® MCU specific settings. functions and operations.
9. An example in assembler could be CLRF PAGE1 MOVLW MOVWF PAGE0 PORTA B’00000011’ PORTA . Ports B.1 Inputs and Outputs The Input and Output ports on a PICmicro®MCU are made up from two registers – PORT and PORT DIRECTION – and are designated PORTA. Inputs are set with a 1 and outputs are set with a 0. The 16C74 has PORTS A. DATA BUS PORTA 05h OUTPUTS INPUTS DATA BUS TRISA 85h TO A/D ADCON1 (16C7X ONLY) Port A has 5 or 6 lines – depending on the PIC – which can be configured as either inputs or outputs. A block diagram of PORTA is shown below.C.B.C.C. The exception to the I/O lines is the A4 pin which has an open collector output.D. Their availability depends upon the PIC being used in the design.change back to register page 0 105 . the voltage levels on the pin – if tied high with a resistor – is inverted to the bit in the PORTA register (i.A0.e. Configuration of the port direction is via the TRISA register.send W to port control register .E. An 8pin PIC has a single GPIO register and TRIS register – 6 I/O lines.4 turns on the transistor and hence pulls the pin low).select register page1 .E and TRISA. The pins have both source and sink capability of typically 25mA per pin.D and E are similar but the data sheet needs to be consulted for PIC specifics.set outputs low .C.B. a logic 1 in porta. As a result.D and E – 33 I/O lines.B.1 as inputs.D. A2-4 as outputs .
Fast I/O enables the user to set the port direction and this remains in place until 106 . This adds lines to a program and hence slows down the speed.. but improves the safety side of the code by ensuring the I/O lines are always set as specified. the port direction registers are set up prior to each I/O operation. It does have the additional PICmicro®MCU hardware functions as alternatives to being used as an 8-bit port. In standard mode. Data is read from the port with either MOVFW PORTA or bit testing with BTFSS or BTFSC. or standard. ensure ADCON1 register is also set correctly an I/O default is ANALOG. NOTE: On devices with A/D converters. fast.Data is sent to the port via a MOVWF PORTA and bits can be individually manipulated with either BSF or BCF..
The compiler does not add lines of code to setup the port direction prior to each I/O operation. //mask out unwanted bits switch(b_rate) { case 0: set_uart_speed(1200). byte bd_sw_get() //baud rate selection { byte b_rate. bit_test(variable. case 1: set_uart_speed(2400). The following is the whole function used to read some dip switches and set up a baud rate for a comms routine. set_tris_b(0xff). break. it is advisable to set up the port conditions before the port direction registers (TRIS). break. The following example sets Port B as inputs and then reads in the value. work in binary. bit_clear(variable. On a bit level there are: bit_set(variable. bit). case 2: set_uart_speed(4800). as this will make it easier for you writing and others reading the source code. b_rate = portb & 0b00000011. bit). b_rate = portb. break. This prevents the port from outputting an unwanted condition prior to being set. When setting bit patterns in registers or ports. bit). //used to set a bit //used to clear a bit //used to test a bit 107 . It also saves converting between number bases. Manipulation of data to and from the I/O ports is made easy with the use of numerous built in functions. case 3: set_uart_speed(9600). break. //make inputs //read in port Bit masking is easily achieved by adding the & and the pattern to mask after the port name b_rate = portb & 0b00000011.re-defined. //mask out unwanted bits The value stored in b_rate can then be used to set up a value to return to the calling function. } } When setting up portb.
This applies to ports b – g. enables or disables the weak pullup on port b set_tris_a(value). //get the state or value of a pin output_bit(pin. FindParity(type d) { byte count.f decfsz count. The following example finds the parity of a value d passed to the routine Fiii which is then equated to a when the routine is called.w rrf d. timing constraints. output_low(pin). #asm movlw 8 movwf count clrw loop: xorwf d.2 Mixing C and Assembler There are times when inline assembler code is required in the middle of a C program. //set an output to logic 1 //set an output to logic 0 //set a pin to input or floating On a port wide basic. The reasons could be for code compactness. //set a port pin to a specific value output_float(pin). the following instructions are used: port_b_pullups(true/false). set the combination of inputs and outputs for a given port – set a 1 for input and 0 for output. pin) permanently sets up the data direction register for the port #use standard_io(port) default for configuring the port every time it’s used 9.The above three can be used on variables and I/O b = input(pin).f 108 . Port direction registers are configured every time a port is accessed unless the following pre-processor directives are used: #use fast_io(port) leaves the state of the port the same unless re-configured #use fixed_io(port_outputs=pin. value). or simply because a routine works ‘as is’. mode output_high(pin).
d=7. 0013: MOVF 0014: MOVWF 0015: GOTO 0016: MOVF 0017: MOVWF } 8 count 27. a=FindParity(d).W 27.7) d: Destination select. d=1 Store in file register f (default) Assembler recognizes W and f as destinations.goto movwf #endasm } loop _return_ main() { byte a. constant data or label. txt data W Working register (accumulator) x Don’t care location 109 . the program looks like: FindParity(type d) { byte count. . 25h.W 27 005 21.F 008 21 016 07 26 26.W 25 Key to PIC16Cxx Family Instruction Sets Field Description b: Bit Address within an 8bit file register (0 . f Register file address (0x00 to 0xFF) k Literal field. a=FindParity(d).F 28. d=0 store result in W. } When compiled.d=7.
skip if zero Increment f Increment f. skip if 0 Inclusive OR W and f Move f Move W to f No operation Rotate left f Rotate right f Subtract W from f Swap halves f Exclusive OR W and f Function W + f >> d W . d f. k >> PC 0 >> WDT (and Prescaler.XOR.AND.i Table pointer control. stop oscillator k – W >> W k .NOT.OR. i = 1 increment after instruction execution. f >> d 0 >> f 0 >> W . skip if 0 W .AND. d f. W >> W PC+1 >> TOS. TOS P >> C TOS >> PC 0 >> WDT. d f. d f. OR literal and W Move literal to W Return from Interrupt Return with literal in W Return from subroutine Go into Standby Mode Subtract W from literal Exclusive OR literal and W Function k + W >> W k . d f. d f f. d f. If assigned) k >> PC(9 bits) k . d f f. f >> d f >> d W >> f f – W >> d f(0:3) << f(4:7) >> d W . d f. d f. skip if 0 f + 1 >> d f + 1 >> d. W >> W k >> W TOS >> PC..XOR. i = 0 Do not change. f >> d f – 1 >> d f – 1 >> d. d Description Add W and f AND W and f Clear f Clear W Complement f Decrement f Decrement f. 1 >> GIE k >> W. d f. f >> d Bit Oriented Instructions 110 .
d f. f >> d 0 >> f 0 >>. d f. stop oscillator W >> I/O control register k .OR. d f.AND. skip if zero Increment f Increment f.XOR. d f f. d f. d f. W >> W k >> W W >> OPTION Register k >> W. skip if 0 W . f >> d 111 . b f. d f. If assigned) k >> PC(9 bits) k . f >> d f – 1 >> d f – 1 >> d. d f. b Description Bit clear f Bit set f Bit test.AND. d f. d Description Add W and f AND W and f Clear f Clear W Complement f Decrement f Decrement f.OR. skip if 0 Inclusive OR W and f Move f Move W to f No operation Rotate left f Rotate right f Subtract W from f Swap halves f Exclusive OR W and f Function W + f >> d W . skip if 0 f + 1 >> d f + 1 >> d.Hex 10Ff 14Ff 18Ff 1CFf Mnemonic BCF BSF BTFSC BTFSS f. f >> d f >> d W >> f f – W >> d f(0:3) << f(4:7) >> d W . OR literal and W Move literal to W Load OPTION Register Return with literal in W Go into Standby Mode Tri-state port f Exclusive OR literal and W Function k .XOR. skip if clear Bit test. b. d f f. k >> PC 0 >> WDT (and Prescaler. W >> W PC+1 >> TOS. b f.NOT. TOS P >> C 0 >> WDT.
W COMF f. Digit Carry BTFSC Status. Digit Carry GOTO k BTFSC Status. b f. Carry BSF Status. Carry GOTO k BTFSS Status. Carry GOTO k BTFSC Status. f INCF f. skip if set Function 0 >> f(b) 1 >> f(b) skip if f(b) = 0 skip if f(b) = 1 PIC16C5X/PIC16CXX Special Instruction Mnemonics These instructions are recognized be the Assembler and substituted in the program listing. Digit Carry GOTO k BTFSS Status. b f. Carry Flag Z Clear Carry Clear Digit Carry Clear Zero k Move File to W f . Mnemonic Description ADDCF f. Zero BTFSC Status. Carry BCF Status. b f. Digit Carry BCF Status. d Subtract Carry from File 112 Z Z . Digit Carry BSF Status. Zero BTFSS Status. Zero MOVF f. d BSF Status.d Negative File Set Carry Set Digit Carry Set Zero Skip on Carry Skip on No Carry Skip on Digit Carry Skip on No Digit Carry Skip on Zero Skip on No Zero f. Digit Carry BTFSS Status. Zero BTFSC Status. Zero GOTO k BTFSS Status. b Description Bit clear f Bit set f Bit test. skip if clear Bit test.Bit Oriented Instructions Hex 4bf 5bf 6bf 7bf Mnemonic BCF BSF BTFSC BTFSS f. They are form of shorthand similar to Macros. Zero GOTO k BCF Status. d GOTO k BTFSCTFSS Status. Carry INCF f. Carry BTFSC Status.
9.3 Advanced BIT Manipulation

The CCS C compiler has a number of bit manipulation functions that are commonly needed for PICmicro®MCU programs.

bit_set, bit_clear and bit_test simply set or clear a bit in a variable or test the state of a single bit. Bits are numbered with the lowest bit (the 1 position) as 0 and the highest bit 7. For example:

c='A';            //c in binary is now 01000001
bit_set(c,5);     //c is now 01100001 or 'a'
if(bit_test(x,0))
   printf("X is odd");
else
   printf("X is even");

shift_left and shift_right will shift one bit position through any number of bytes. These functions return as their value 0 or 1 representing the bit shifted out. In addition, they allow you to specify the bit value to put into the vacated bit position. Note, these functions consider the lowest byte in memory the LSB. For example:

int x[3] = {0b10010001, 0b00011100, 0b10000001};
short bb;
// x msb first is: 10000001, 00011100, 10010001
bb = shift_left(x,sizeof(x),0);
// x msb first is: 00000010, 00111001, 00100010   //bb is 1
bb = shift_left(x,sizeof(x),1);
// x msb first is: 00000100, 01110010, 01000101   //bb is 0

Note: The first parameter is a pointer. In this case, since x is an array, the unsubscripted identifier is a pointer. If a simple variable or structure was used, the & operator must be added. The second parameter is the number of bytes and the last parameter is the new bit. For example:

long y;
struct { int a,b,c; } z;
shift_left(&y,2,1);

The swap function swaps the upper 4 bits and lower 4 bits of a byte. For example:

int x;
x = 0b10010110;
swap(x);
//x is now 01101001 9.shitf_right(&z.sizeof(x)). 00111001. Applying a pre-scalar may slow increment. An interrupt may also be generated. The capabilities are as follows: rtcc (timer0) = 8Bit. the timer1 count may be saved in another register when a pin changes. //x msb first is: 10000001. 0b10000001}. For example: int x[3] = {0b10010001.3.0). In capture mode. This timer is used as part of the PWM. May increment on the instruction clock or by an external source. 00011100. May increment on the instruction clock or by an external source. When timer1 overflows from 65535 to 0. an interrupt can be generated (not 16C5X series) timer1 = 16Bit. Applying a pre-scaler may slow increment. 10010001 rotate_left(x. In compare mode.4 Timers All PICmicro®’s have an 8-bit timer and some PIC’s have two more advanced timers. For example: int x. and an interrupt may also be generated. an interrupt can be generated. When timer0 overflows from 255 to 0. 114 . //x msb first is: 00000010. 00100011 The swap function swaps the upper 4 bits and lower 4 bits of a byte. rotate_left and rotate_right work like the shift functions above except the bit shifted out of one side gets shifted in the other side. 0b00011100. x = 0b10010110 swap(x). a pin can be changed when the count reaches a preset value.
}

Timer1 is a 16-bit timer which may increment on the instruction clock or by an external source; applying a pre-scaler may slow the increment. Timer2 is an 8-bit timer used as part of the PWM. When timer2 overflows from 255 to 0, an interrupt can be generated. The interrupt can be slowed by applying a post-scaler, so that it requires a certain number of overflows before the interrupt occurs.

The following is a simple example using the rtcc to time how long a pulse is high:

#include <16c74.h>
#fuses HS,NOWDT
#use delay(clock=1024000)
#use rs232(baud=9600,xmit=PIN_C6,rcv=PIN_C7)

main() {
   int time;
   setup_counters(rtcc_internal, rtcc_div_256);   //increments 1024000/4/256 times per second
                                                  //or every millisecond
   while(!input(PIN_B0));                         //wait for high
   set_rtcc(0);
   while(input(PIN_B0));                          //wait for low
   time = get_rtcc();
   printf("High time = %u ms.", time);
}

The following is an example using the timer1 capture feature to time how long it takes for pin C2 to go high after pin B0 is driven high:

#include <16c74.h>
#fuses HS,NOWDT
#use delay(clock=8000000)
#use rs232(baud=9600,xmit=PIN_C6,rcv=PIN_C7)
#bit capture_1 = 0x0c.2                       //pir1 register
                                              //bit 2 = capture has taken place
main() {
   long time;
   setup_ccp1(ccp_capture_re);                 //configure CCP1 to capture rise
   setup_timer1(t1_internal | t1_div_by_2);    //increments every 1 us
   capture_1 = 0;
   set_timer1(0);
   output_high(PIN_B0);
   while(!capture_1);
   time = ccp_1;
   printf("Reaction time = %1u us.", time);
}

9.5 A/D Conversion

The A/D in the 16C7x and 12C67x devices has a resolution of 8 bits. This means that the voltage being measured can be resolved to one of 255 values. If a 5 volt supply is used, then the measured accuracy is 5/255 = 19.6mV over a 0 to 5 volt range. However, if the reference voltage is reduced to 2.55 volts, the resolution becomes 10mV but the working range falls to 0 to 2.55 volts. Other Microchip parts have 10, 11, 12 and 16 bits resolution. It is important to note which combination of I/O lines can be used for analog and digital. NOTE: The default for ports having both analog and digital capability is ANALOG. The following tables are extracted from the data sheets.

[Block diagram: PORTA (PORTE) pins, steered by TRISA (TRISE) and a MUX, feed the A/D convertor; ADRES holds the A/D result, ADCON0 is the control and status register and ADCON1 the analog/digital control register.]

[Table: allowed analog (A) / digital (D) / Vref pin combinations for pins A0-A3 on the 16C710, 16C711 and 16C72/3 devices.]

In C, the setup and operation of the A/D is simplified by ready made library routines:

setup_adc_ports(mix) will setup the ADC pins to be analog, digital or a combination. The allowed combinations for mix vary depending on the chip. The constants all_analog and no_analog are valid for all chips. Some other example constants: ra0_ra1_ra2_ra3_analog, ra0_ra1_analog_ra3_ref

setup_adc(mode) sets up the analog to digital converter. The modes are as follows: adc_off, adc_clock_internal, adc_clock_div_2, adc_clock_div_8, adc_clock_div_32

set_adc_channel(0-7) selects the channel for a/d conversion.

read_adc() will read the digital value from the analog to digital converter. This function returns an 8-bit value 00h-FFh on parts with an 8 bits A/D converter. On parts with greater than 8 bits A/D, the value returned is always a long with the range 000h-FFFFh. Calls to setup_adc and set_adc_channel should be made sometime before this function is called.

setup_adc_ports(all_analog);   //sets porta to all analog inputs
set_adc_channel(1);            //points a/d at channel 1
delay_ms(5000);                //waits 5 seconds
value = read_adc();            //reads value
printf("A/D value = %2x\n\r", value);   //prints value

9.6 Data Communications/RS232

RS232 communications between PCs, modems etc. form part of an engineer's life. The problem seems to arise when self built products need to be interfaced to the outside world. When connecting equipment with RS232 interfaces, it is important to know which is classified as the Data Controlling Equipment (DCE) and which is Data Terminal Equipment (DTE). A minimum interface can be 3 wires: Ground, Transmit, and Receive, but what to do with the remaining pins? The voltage levels are between ±3 and ±15 volts, allowing plenty of leeway for both drivers and receivers, and are documented in the EIA-232-D or CCITT V24/28 specification. The permutations of 9 or 25 pins on a D connector and the software controlling communications are endless.

Common Problems:

Result                     Possible reasons
Garbled characters         parity, speed, character length, stop bits
Lost data                  flow control
Double space               translation of carriage returns or line feeds
Overwriting                translation of carriage returns or line feeds
No display of characters   duplex operation
Double characters          duplex operation

Data Format

Data sent via an RS232 interface follows a standard format:

Start bits    always 1 bit
Stop bits     1 or 2 bits
Data bits     7 or 8 bits
Parity bits   none if no error detection is required; odd or even if error detection is required

[Figure: DATA FORMAT - 8 DATA BITS]

The parity system may be either 'odd' or 'even' and both systems give the same level of error detection. In an even parity system, the overall count of '1's in the combined data byte, plus parity bit, is even. So, with an 8 bits data byte of '10101100' the parity bit would be set to '0'. In an odd parity system, the overall count of '1's in the combined data byte, plus parity bit, is odd. Thus, with an 8 bits data byte of '10101100' the parity bit would be set to '1'. If corruption of either data bytes or of the parity bit itself takes place, the corruption will be recognized when the receiver carries out the parity check; in most systems this would result in a request for re-transmission of the data. In the event of more than one bit being corrupted, it is possible that the receiver will not recognize the problem, provided that the parity appears correct. Parity checking is therefore not a cast iron method of checking for transmission errors, but in practice it provides a reasonable level of security in most systems. The parity system does not correct errors in itself; it only indicates that an error has occurred, and it is up to the system software to react to the error state. The PICmicro®MCU does not have on-chip parity testing or generation, so the function needs to be generated in software. This adds an overhead to the code generated which could have a knock on effect on execution times.

[ASCII character chart residue: columns 010 (SP ! " # $ % & ' ( ) * + < > ?) and 011 (0-9 : ;)]

Bit Rate Time Calculation

As BAUD is bits per second, each data bit has a time of 1/(baud rate). This works out as 1200 baud = 833uS, 2400 baud = 416uS.
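Since the parity bit has to be generated in software on the PIC, it is worth seeing how small that computation really is. A language-agnostic sketch (Python here, purely for illustration; the same bit-counting loop is a few instructions in PIC C or assembler):

```python
def parity_bit(byte, even=True):
    """Return the parity bit for an 8-bit value so that the total
    count of 1s (data bits plus parity bit) is even or odd."""
    ones = bin(byte & 0xFF).count("1")
    if even:
        return ones % 2          # 0 if the count is already even
    return 1 - ones % 2          # complement for odd parity

b = 0b10101100                   # the example byte from the text (four 1s)
print(parity_bit(b, even=True))  # 0 - matches the even-parity example
print(parity_bit(b, even=False)) # 1 - matches the odd-parity example
```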
The USART can be configured as Synchronous Master, Synchronous Slave or Asynchronous, the latter being the most common for interfacing peripherals. In Asynchronous mode, the USART can handle full duplex communications, but only half duplex in Synchronous mode. Besides the obvious interface to PC's and Modems, the USART can interface to A/D, D/A and EEPROM devices. Data formats acceptable to the USART are: 8 or 9 data bits; none, odd or even parity; and indication of over run or framing errors on the received data. There are pre-set functions which speed up application writing:

#use fixed_io(c_outputs=pin_C6)   //speeds up port use
#use delay(clock=4000000)         //clock frequency
#use rs232(baud=4800, xmit=PIN_C6, rcv=PIN_C7)
The CCS compiler has the ability to use the on-board PICmicro®MCU's UART if one is present. If the UART is available in hardware, the code generated will use the existing hardware. If, however, the hardware is absent, the code behaves like a hardware UART, created and tested in user software, but the resulting code generated will be larger. With the exception of the interrupt on transmit and receive, this code transparency enables code to be moved from one PICmicro®MCU application to another with minimal effect. The software UART has the ability to invert the output data levels, removing the need for an external driver/level shifter in logic level applications.

Included in the C compiler are ready-made functions for communications such as:

getc, getch, getchar   waits for and returns a character received from the RS232 rcv pin
gets(char *string)     reads a string of characters into the variable until a carriage return is received. A 0 terminates the string. The maximum length of characters is determined by the declared array size for the variable.
putc, putchar          sends a single character to the RS232 xmit pin
puts(s)                sends a string followed by a line feed and carriage return

The function kbhit() may be used to determine if a character is ready; this may prevent hanging in getc() waiting for a character. The following is an example of a function that waits up to one-half second for a character:

char timed_getc() {
   long timeout;
   timeout_error = FALSE;
   timeout = 0;
   while(!kbhit() && (++timeout < 50000))   //1/2 second
      delay_us(10);
   if(kbhit())
      return(getc());
   else {
      timeout_error = TRUE;
      return(0);
   }
}
9.7 I2C Communication

I2C is a popular two-wire communication bus to hardware devices. A single two wire I2C bus has one master and any number of slaves. The two wires are labeled SCL and SDA; both require a pull-up resistor (1-10K) to +5V. One wire supplies a clock while the other sends or receives data. Each slave has a unique address. The master may send or request data to/from any slave. Communication begins with a special start condition, followed by the slave address. The LSB of this first byte indicates the direction of data transfer from this point on. Data is transferred and the receiver specifically acknowledges each byte. The receiver can slow down the transfer by holding SCL low while it is busy. After all data is transferred, a stop condition is sent on the bus.

The following is an example C program that will send three bytes to the slave at address 10 and then read one byte back:

#include <16C74.h>
#fuses XT,NOWDT
#use delay(clock=4000000)
#use I2C(master, SCL=PIN_B0, SDA=PIN_B1)

main() {
   int data;
   I2C_START();
   I2C_WRITE(10);
   I2C_WRITE(1);
   I2C_WRITE(2);
   I2C_WRITE(3);
   I2C_STOP();
   I2C_START();
   I2C_WRITE(11);
   data = I2C_READ();
   I2C_STOP();
}

9.8 SPI Communication

Like I2C, SPI is a two or three wire communication standard to hardware devices. SPI is usually not a bus, since there is one master and one slave. If a third wire is used, it may be an enable or a reset, as each device is different. There is no standard beyond this.

The following is C code to send a 10 bits command MSB first to a device. Note: the shift_left function returns a single bit (the bit that gets shifted out) and this bit is the data value used in the output_bit function.

main() {
   long cmd;
   int i;
   cmd = 0x3e1;
   for(i=1; i<=6; i++)                          //left justify cmd
      shift_left(&cmd, 2, 0);
   output_high(PIN_B2);                         //enable device
   //send out 10 data bits each with a clock pulse
   for(i=0; i<=10; ++i) {
      output_bit(PIN_B1, shift_left(&cmd, 2, 0));
      output_high(PIN_B0);
      output_low(PIN_B0);
   }
   output_low(PIN_B2);                          //disable device
}

The following is C code to read an 8 bits response. Again shift_left is used, and the data that is shifted in is the bit on the input pin.

main() {
   int data;
   int i;
   output_high(PIN_B0);                         //enable device
   //send a clock pulse and read a data bit eight times
   for(i=0; i<=8; ++i) {
      output_high(PIN_B2);
      output_low(PIN_B2);                       //B2 is the clock
      shift_left(&data, 1, input(PIN_B1));
   }
   output_low(PIN_B0);                          //disable device
}

The previous are two great examples of shifting data in and out of the PICmicro®MCU a bit at a time. They can be easily modified to talk to a number of devices. Slower parts may need some delays to be added. Some PIC's have built in hardware for SPI. The following is an example of using the built in hardware to write and read:

main() {
   int data;
   setup_spi(SPI_MASTER | SPI_H_TO_L | SPI_CLK_DIV_16);
   output_low(PIN_B0);
   spi_write(3);
   data = spi_read(0);
   output_high(PIN_B0);
}

Note: Built in hardware does have restrictions. For example, the above code sent 16 bits, not 10. Most SPI devices ignore all bits until the first 1 bit, so this will still work.

9.9 PWM

Once the registers are set up, the PWM runs on its own without constant software involvement. Calculation of the PWM values is best achieved with a simple spreadsheet. For example, a 600Hz PWM with a 4MHz oscillator and /16 prescaler will give 103.2 to load into the PR2 register.

setup_timer_2(mode, period, postscale) initializes timer 2, where mode is T2_DISABLED, T2_DIV_BY_1, T2_DIV_BY_4 or T2_DIV_BY_16.
set_pwm1_duty(value) sets the duty cycle; value is in the range 0 to period. Note: if value is a long, the 2 lsb's will be ignored.

set_ccp1(CCP_PWM), set_ccp2(CCP_PWM): this function will initialize the CCP in a PWM mode.

Example:

setup_ccp1(CCP_PWM);                  //sets up for pwm
setup_timer_2(T2_DIV_BY_4, 140, 0);
set_pwm1_duty(70);                    //50% duty cycle - 140 as offset, so 70 is half of 140

If the oscillator = 4MHz, then the frequency will be 1.773KHz with a potential resolution of 11 bits.

The following example will generate one of two tones - determined by the value passed to the routine - for a set duration. The frequencies for the two tones are 2.475KHz and 996Hz. The duration is set with a simple delay, followed by setting the duty cycle to 0 to silence the output even though the PWM is still running.

void sound_bell(byte y) {
   setup_ccp1(CCP_PWM);               //configure CCP1 as a PWM
   if(y==2) {
      setup_timer_2(T2_DIV_BY_4, 100, 0);
      set_pwm1_duty(50);              //make this value half of tone
      delay_ms(200);                  //0.2 second bell from terminal
      set_pwm1_duty(0);               //silence the output
   }
   else
      ...
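The figures quoted here follow from the standard mid-range PIC PWM period formula, period = (PR2 + 1) x 4 x Tosc x prescale. A quick sketch to check the arithmetic (Python, purely for illustration):

```python
def pwm_freq(fosc, prescale, pr2):
    """PWM frequency for a PIC mid-range CCP module:
    period = (PR2 + 1) * 4 * Tosc * prescale, frequency = 1/period."""
    return fosc / (4.0 * prescale * (pr2 + 1))

# 4 MHz oscillator, /4 timer-2 prescaler:
print(round(pwm_freq(4_000_000, 4, 100)))  # 2475 -> the 2.475KHz tone
print(round(pwm_freq(4_000_000, 4, 140)))  # 1773 -> the 1.773KHz figure
```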
Interrupts can come from a wide range of sources within the PICmicro®MCU and also from external events. Some of the interrupt sources are shown below. Depending on which PIC is used in a design, the type and number of interrupts may vary - refer to the data sheet for the latest information. The PIC16C5X series have no interrupts, and software written for these products will have to perform a software poll. When an interrupt occurs, the PIC hardware follows a fixed pattern as shown below. The first step is to test and determine if the source is the desired one or, in the case of multiple interrupts, which one to handle first - the software is all your responsibility.
#priority sets up the order of the interrupt priority: #priority rtcc, tmr0, rb, portb

#int_global - use with care to create your own interrupt handler. The main save and restore of registers and startup code is not generated.

#int_default is used to capture unsolicited interrupts from sources not setup for interrupt action. Examine the interrupt flags to determine which false interrupt has been triggered.

enable_interrupts(level) and disable_interrupts(level) set or clear the respective interrupt enable flags, so interrupts can be turned on and off during the program execution.

ext_int_edge(edge) is used to select the incoming polarity on PORTB bit0 when used as an external interrupt. The edge can be l_to_h or h_to_l.

#int_xxx - where xxx is the desired interrupt. This example forces an interrupt on receipt of a character received via the USART. The character is placed in a buffer and the buffer pointer incremented, ready for the next character. This function is extracted from an LCD display program, as the characters are received faster than the display can handle them.

#int_rda                  //enable usart receive interrupt
rs232_handler()           //interrupt driven data read and store
{
   b = getch();           //load character
   Buffer[Buff+1] = b;    //store character
   Buff++;                //increment pointer
}

main() {
   enable_interrupts(INT_RDA);
   enable_interrupts(GLOBAL);
   do {
      ...
   } while(TRUE);
}
Include Libraries

These libraries add the 'icing on the cake' for the C programmer. They contain all the string handling and math functions that will be used in a program. The various libraries are included as, and when, the user requires them.

CTYPE.H contains several traditional macros as follows. Each returns TRUE if:

isalnum(x)    x is an alphanumeric value, i.e. 0-9, 'A' to 'Z' or 'a' to 'z'
isalpha(x)    x is an alpha value, i.e. 'A' to 'Z' or 'a' to 'z'
isdigit(x)    x is a numeric value, i.e. 0-9
islower(x)    x is a lower case value, i.e. 'a' to 'z'
isupper(x)    x is an upper case value, i.e. 'A' to 'Z'
isspace(x)    x is a space
isxdigit(x)   x is a hexadecimal digit, i.e. 0-9, 'A' to 'F' or 'a' to 'f'

STDLIB.H holds all the complicated math functions; in all cases, the value returned is a floating-point number. Examination of this file gives an insight into how the mathematical functions operate.
Pointers to get started:

- Start off with a simple program - don't try to debug 2000 lines of code in one go. Write, test, and debug each module stage by stage.
- Use known working hardware.
- Draw a software functional block diagram to enable modular code writing.
- Use some form of I/O map when starting your design to speed up port identification and function.
- Comment on the software as it's written; otherwise, it is meaningless the following day or if read by another.
- Update documentation at the end of the process.
- Have a few flash versions of the PIC MCU chip on hand when developing, to save time waiting, as the device is electrically erasable (i.e. no window - you will not need an eraser).
- If using PICSTART PLUS (programmer only) you will need to use the program-test-modify process, so allow extra development time.
- An In Circuit Emulator (ICE) - PIC MASTER, ICEPIC, MPLAB ICE2000 or ICE4000 - allows debugging of hardware and software at the same time. You will need a programmer to go with the ICE.
- Attend a Microchip approved training workshop.

Development Path:

Zero Cost      Demo versions of the C compiler
Starter        PICSTART PLUS programmer, C compiler, and PIC MCU samples
Intermediate   Microchip ICD for the 16F87x family or ICD2 for most Flash PIC MCUs
Serious        In Circuit Emulator (ICE) - see a catalog for part numbers

You will also need a PC running Windows 95, 98, Me, NT, 2000, XP or Linux, and a C compiler, if you then wish to take the development from paper to a hardware design.

What happens when my program won't run?

- Has the oscillator configuration been set correctly when you programmed the PIC?
- Was the watchdog enabled when not catered for in the software?
- Have all the ports been initialized correctly?
- Ensure the data registers are set to a known condition.
- Make sure there is no duplication of names given to variables, registers and labels.
- Is the reset vector correct, especially if code has been moved from one PICmicro®MCU family to another?
- On 16C7X devices, check if the ADCON1 register is configured as analog or digital.

Reference Literature

Microchip data sheets and CDROM for latest product information
CCS Reference Manual
Microchip MPLAB Documentation and Tutorial

Some good reference books on C (in general):
Turbo C - Kelly & Pohl
An Introduction to Programming in C - Kelly & Pohl
C Programming Guide - Purdum
The C Programming Language - Kernighan & Ritchie

Internet Resource List: www.microchip.com, www.ccsinfo.com, www.piclist.com, www.pic-c.com, www.bluebird-electronics.co.uk

Authors Information

Nigel Gardner is an Electronic Engineer with over 20 years of industrial experience in various fields. He owns Bluebird Electronics, which specializes in LCD display products, custom design work, PIC support products and Microchip training workshops. Nigel is a member of the Microchip Consultants Group.

Contact Information: Bluebird Electronics - Tel: 01380 827080, Fax: 01380 827082, Email: info@bluebird-electronics.co.uk, Web: www.bluebird-electronics.co.uk
Using the ServerSocket.accept() method, we can control the number of active socket clients, i.e. set a maximum limit on concurrent active client service threads, allow all currently active clients to terminate before gracefully stopping the server, etc. Is there a way to exercise a similar degree of control in an RMI server?
Created May 4, 2012
Shaun Childers: I don't think it's possible with the RMI classes provided with the JDK (if it is, someone will correct me), but you could design your system to control this. (I'm sure we could all come up with multiple ways of solving this design issue.) The first solution off the top of my head would be the following (I know this is bare bones, but you get the idea; just build it into the design of your system):
* Create an RMI server manager object whose job it is to keep up with how many concurrent connections are executing on the object. This could be done in such a way: (Assume we have the RMIServerManager running and it's a Singleton (one instance) class.)
public class RmiObjectImpl extends UnicastRemoteObject implements RmiObject {
    ...
    public void someMethod(Object[] o) throws RemoteException {
        // before we execute this method, check to see if it's OK
        if (serverManager.proceed()) {
            // let the server manager know we have another client
            serverManager.register();
            // now perform the method functionality
            ...
            // now be sure to let the server manager know this client is done and leaving
            serverManager.unregister();
        } else
            throw new TooManyUsersException("Please wait.");
    }
    ...
}
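The counting logic the server manager needs is essentially a bounded concurrent counter. A language-agnostic sketch of the same proceed/register/unregister idea (Python is used here only for brevity; the answer's code is Java, and the class below is invented to illustrate the pattern):

```python
import threading

class ServerManager:
    """Tracks concurrent clients and refuses new ones past a limit,
    mirroring the proceed()/register()/unregister() calls above."""
    def __init__(self, limit):
        self.limit = limit
        self.active = 0
        self.lock = threading.Lock()

    def proceed(self):
        with self.lock:
            return self.active < self.limit

    def register(self):
        with self.lock:
            self.active += 1

    def unregister(self):
        with self.lock:
            self.active -= 1

mgr = ServerManager(limit=2)
mgr.register()
mgr.register()
print(mgr.proceed())   # False - a third client must wait
mgr.unregister()
print(mgr.proceed())   # True - a slot has freed up
```

As in the Java sketch, there is a small window between proceed() and register(); a production version would combine them into one atomic try-register operation.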
This package provides an environment to practice different types of reinforcement learning models.
Project description
pyrace

This library provides a flexible environment to practice different reinforcement learning models.
Import
from pyrace import Env
Create an environment
env = Env()
Env class

The methods of this class are pretty similar to the environments provided by the gym library.
render : prints an image of the current state of the environment. Please make sure to set the matplotlib backend to auto (%matplotlib auto).

reset : resets the environment and outputs the initial state.

step : takes an action as input. An action is an array; the size and types in this array are defined by the environment's action space. This method outputs the new state.

load : takes the name of a map as input and resets the environment with this new map loaded. Currently only one map is ready (map_1), so this method is not that useful yet.

sample_action : takes no input; outputs a random sample of the current environment's action space.

get_model_info : prints the current environment's metadata. This metadata gives a description of the action space and the observation space.
get_model_list : gives the list of the names of the different available models.
get_map_list : gives the list of the names of the maps available.
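To show how these methods fit together in a typical episode loop, here is a sketch using a tiny stand-in class. StubEnv below is invented purely to illustrate the calling pattern; the real Env comes from pyrace, renders a race map, and returns richer states:

```python
import random

class StubEnv:
    """Minimal stand-in with the same method names as pyrace's Env
    (reset / step / sample_action), just to show the usage pattern."""
    def reset(self):
        self.state = [0.0, 0.0]
        return self.state

    def sample_action(self):
        # an action is an array; here a single float in [-1, 1]
        return [random.uniform(-1.0, 1.0)]

    def step(self, action):
        self.state[0] += action[0]   # move according to the action
        return self.state

env = StubEnv()
state = env.reset()
for _ in range(5):                   # a short random-agent episode
    state = env.step(env.sample_action())
print(len(state))                    # 2
```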
You can download the App for Ultrabook(TM) from Intel AppUp store Here
When I started to think about the Ultrabook, I found the device and the concept quite tricky. It is a hybrid of tablets and laptops, with tons of sensors preloaded. When I looked around at the apps that people are building, I realized that the apps are mainly either entertainment centric, or a solution to a problem that can just as well be provided through a pure desktop or tablet. Having used tablets and laptops for long, the main flaw that I see in tablets is that they offer little in the way of productivity. Most people use them for surfing, video and reading e-books. I wanted to come up with a hybrid app that adds value to productivity and at the same time brings entertainment value to the Ultrabook. So the app needed to explore the great programming and processing capabilities alongside the touches and nudges. So I am presenting a powerful image utility app, which I have named ImageGrassy. So what does the app do? It gives you some cool tools to have fun with your images, tools to analyze them, and the power to create your own custom XML routine to build batch image processing applications.
So why do we require another image processing tool when there are tons of good tools like Photoshop and OpenCV already available?

Image editing tools like Photoshop are difficult to use for image analysis purposes, tools like Matlab are difficult to mod into an entertainment station or an image editor, and OpenCV needs coding.

So here is the hybrid that does all of them (well, to an extent), while utilizing the power of the various sensors.
So you get the basic idea of how exactly the system is intended to function. The heart of the system is a gallery, which is the user's photo gallery along with a gallery config that stores the image tags and other information. You can manually add and remove files from the gallery.

Once you start the application, the gallery is loaded; if you do not have one, then it prompts you to create one. The images in the gallery are loaded as thumbnails. You can select any image from the gallery and import it into the main viewer, which renders the image.

Users can take advantage of the many utility and fun applications that are shipped with this app to readily play with their images. They can also use a tagging system that associates features extracted from an image with a type of image to remember. Users can also input an image to search with, and the system will find all the related images from the gallery through a feature matching process.

The system also supports template matching. You create small thumbnails of templates, like a face or other objects, and the system should be able to locate the part of the image that matches the template.
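The template matching idea can be sketched with a brute-force sum-of-absolute-differences search. Python is used here for illustration only; the app itself is C#/WPF, and the SAD score below is just one possible choice of matching function:

```python
def match_template(image, template):
    """Slide `template` over `image` (both 2-D lists of gray levels)
    and return the (row, col) where the window differs least from the
    template, scored by sum of absolute differences (SAD)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# a template planted at row 1, col 2 of a small test image:
img = [[0, 0, 0, 0, 0],
       [0, 0, 9, 8, 0],
       [0, 0, 7, 9, 0],
       [0, 0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 9]]
print(match_template(img, tpl))  # (1, 2)
```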
This is all from a user perspective. The app comes shipped with a developer's interface. Here you basically specify the operations serially, and the system processes your algorithm as a batch and produces the desired result.
Here is a sample code snippet of the new algorithm-level language that is introduced through the app:
start;
I=input of type jpg;
Ig=convert I to GRAY;
Ige=equilize Ig through Histogram;
Ib= convert Ige to BINARY;
Ibr=resize Ib to 128,128;
Ibr=process Ibr with DILATION with 3,3;
M=process Ibr with BLOB_DETECTION;
M=combine M with I;
save M as "rslt.jpg";
end
The fundamental idea is that it hides any looping or heavy programming constructs, so programmers can simply work with algorithms by combining and merging them.

We have not tested the control statements yet.
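To give a feel for how such statements could be handled, here is a minimal, hypothetical parser for a single line of the batch language. Python is used for illustration; the real implementation is C#, and the grammar below is only inferred from the snippet above:

```python
import re

# One batch line has the shape:  <result> = <verb> <operand> [to|with|through <args>] ;
LINE = re.compile(r"(\w+)\s*=\s*(\w+)\s+(\w+)"
                  r"(?:\s+(?:to|with|through)\s+(.+?))?\s*;")

def parse_line(line):
    """Split one statement into result, verb, operand and optional args."""
    m = LINE.match(line.strip())
    if not m:
        raise ValueError("not a batch statement: " + line)
    result, verb, operand, args = m.groups()
    return {"result": result, "verb": verb,
            "operand": operand, "args": args}

stmt = parse_line("Ig=convert I to GRAY;")
print(stmt["verb"], stmt["args"])  # convert GRAY
```

A full interpreter would dispatch on the verb (convert, resize, process, ...) and run the corresponding image operation, so the user never writes an explicit loop.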
Sensors
One of the planned features is adjusting the brightness of the viewer depending upon the light intensity sensed by the ambient light sensor. The size and orientation of the viewing pane can be changed through touch. A context-menu-driven operation rendering will also be provided. The pane can be resized, rotated and moved to a different part of the screen using screen touch input.

Image cropping will be supported through the stylus.
A. Ambient Light Sensor
int changed = 0;
var brightnessSensor = Windows.Devices.Sensors.LightSensor.GetDefault();
if (brightnessSensor != null)
{
    brightnessSensor.ReadingChanged += (sender, eventArgs) =>
    {
        double mc = WpfImageProcessing.GetMeanContrast(iSrc);
        // Throttle the updates: only react after 100 readings,
        // otherwise rapid sensor events would make the image flicker.
        if (changed / 100 > 0)
        {
            Image1.Source = WpfImageProcessing.ContrastStretch(iSrc,
                .5 - (mc - eventArgs.Reading.IlluminanceInLux) / MaxLux);
            iSrc = Bitmap.BitmapImageFromBitmapSource(Image1.Source as BitmapSource);
            changed = 0;
        }
        else
            changed++;
    };
}
The logic is to trap the light change event. Now, my long career with Arduino, the 16F84 and all sorts of sensors tells me that such an event might be fired very frequently if you are outdoors (say, at a railway station), so adjusting the main image brightness in this event handler may actually trigger a flicker. Hence I added a variable to limit the number of refreshes.
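The counting trick itself is language-agnostic. A minimal sketch of the same every-Nth-event throttle (Python, purely for illustration):

```python
def make_throttle(n, action):
    """Return an event handler that forwards only every n-th call to
    `action`, swallowing the rest - the same trick the ReadingChanged
    handler uses to avoid flicker on rapid sensor events."""
    count = 0
    def handler(reading):
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            action(reading)
    return handler

applied = []
handler = make_throttle(100, applied.append)
for lux in range(250):          # simulate 250 rapid sensor readings
    handler(lux)
print(len(applied))             # 2 - fired on the 100th and 200th call
```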
The contrast stretching logic is as below:
public static BitmapSource ContrastStretch(BitmapImage srcBmp, double pc)
{
    if (pc == .5)
        return srcBmp;

    bool bright = true;
    pc = .5 - pc;
    if (pc < 0)
        bright = false;
    pc = Math.Abs(pc);

    Bitmap bmp = new Bitmap(srcBmp);
    Bitmap contrast = new Bitmap(srcBmp);   // contrast holds the resultant image
    PixelColor c;
    for (int i = 0; i < bmp.NumRow; i++)
    {
        for (int j = 0; j < bmp.NumCol; j++)
        {
            c = bmp.GetPixel(j, i);         // extract the color of a pixel
            // extract the red, green, blue components from the color
            int rd = c.Red; int gr = c.Green; int bl = c.Blue;
            if (bright)
            {
                // brighten: scale each channel up, clamping at 255
                rd = rd + (int)((double)rd * pc);
                gr = gr + (int)((double)gr * pc);
                bl = bl + (int)((double)bl * pc);
                if (rd > 255) rd = 255;
                if (gr > 255) gr = 255;
                if (bl > 255) bl = 255;
            }
            else
            {
                // darken: scale each channel down, clamping at 0
                rd = rd - (int)((double)rd * pc);
                gr = gr - (int)((double)gr * pc);
                bl = bl - (int)((double)bl * pc);
                if (rd < 0) rd = 0;
                if (gr < 0) gr = 0;
                if (bl < 0) bl = 0;
            }
            PixelColor c2 = new PixelColor(rd, gr, bl, c.Alpha);
            contrast.SetPixel(j, i, c2);
        }
    }
    contrast.Finalize();
    return contrast.Image;
}
As you can see, the method considers the current contrast value to be the 50% point and then adjusts it on a linear scale, up towards 100% or down towards 10%.
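Per channel, that adjustment is a single clamped linear formula. A sketch mirroring the C# routine above (Python, for illustration; note that, as in the C# code, values of pc below 0.5 brighten and values above 0.5 darken):

```python
def stretch_channel(v, pc):
    """Scale one 0-255 channel value relative to the 0.5 midpoint,
    mirroring the C# routine: pc < 0.5 brightens, pc > 0.5 darkens."""
    delta = 0.5 - pc
    if delta == 0:
        return v
    if delta > 0:                        # brighten, clamp at 255
        return min(255, v + int(v * delta))
    return max(0, v - int(v * -delta))   # darken, clamp at 0

print(stretch_channel(200, 0.5))   # 200 (unchanged)
print(stretch_channel(200, 0.25))  # 250 (brightened by 25%)
print(stretch_channel(200, 0.75))  # 150 (darkened by 25%)
```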
B. GPS Sensor and Compass
Now this is quite interesting. Chris pointed out in my post that 'GPS is a great tool for Image Processing'. I must admit that I somehow managed not to understand his point at first. But then I went to my drawing board and realized: wow, GPS! So you visit a place, you take images, you create a gallery, and you add a tag with the GPS location. As you move around, the gallery gets updated and shows you the images corresponding to your location. But is that of any use? Have you ever walked around the busy streets of Asian countries like Malaysia, or Delhi/Mumbai/Kolkata in India? The streets look the same and crowded, making it difficult for you to locate your hotel or other landmarks. So use your Ultrabook to track them.
As you have already understood, the app uses an XML gallery data file which includes several elements like Tag, Path, IsFacePresent, FaceLocation and Features. We add another element called GPS and populate the gallery based on it.
What is more, we have a compass, which keeps us updated with the direction of motion. So, from the current location and direction, we calculate the other GPS points along the way that have images in our gallery, and show them as an overlay.
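Deciding which gallery images are "on the way" reduces to a great-circle distance test against each entry's GPS element. A sketch (Python for illustration, with a made-up two-entry gallery; the real app stores these entries in the XML gallery config):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0                           # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_images(gallery, here, radius_km=0.5):
    """Return the Path of every entry whose GPS tag lies within
    radius_km of the current position `here` = (lat, lon)."""
    return [e["Path"] for e in gallery
            if haversine_km(here[0], here[1], *e["GPS"]) <= radius_km]

# hypothetical gallery entries (Path + GPS element, as in the XML config)
gallery = [
    {"Path": "hotel.jpg",   "GPS": (28.6139, 77.2090)},   # ~65 m away
    {"Path": "airport.jpg", "GPS": (28.5562, 77.1000)},   # ~12 km away
]
print(nearby_images(gallery, (28.6142, 77.2085)))  # ['hotel.jpg']
```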
The utility can get better with an integration with Bing Maps. However, at this point, I have not tested this integration yet.
C. Webcam
I know, you will argue that it is not really a sensor! But wait: ours is an image processing app that allows image manipulation, search, adjustment, sharing and what not. So what better sensor than a capture from your Ultrabook? Chris pointed out that the Ultrabook is thin enough that he can actually twist and rotate it easily, so I will assume that it can also be used outdoors and at events. You may actually want the image processing framework to do its stuff on images captured from the webcam.
Actually, we did integrate this part. Note that WPF does not support webcam access directly, and you need to go for a framework like Microsoft Expression Encoder to do it. I tested it and did not find any distinct advantage in it. So I used DirectX with System.Drawing to capture a System.Drawing.Bitmap (yeah, I know it is dirty coding, but it works!) and then convert it to the WPF BitmapImage type. The rest of the processing takes the help of our WpfImageProcessing library, and the result is then overlaid on the captured image through an ImageBrush.
This viewer is basically a WPF Image control, so its source must be a BitmapImage. We hold the actual image in a BitmapImage object and assign it as the image source.

Now, coming to the processing part: WPF does not allow image pixel access through GDI or System.Drawing.Imaging. It provides classes like WriteableBitmap, BitmapFrame and BitmapImage, which are to be used along with streams for processing the image. A stream or array is linear memory, but image processing needs matrix operations for many tasks. So we first create a Bitmap class which provides an abstraction over the image operations, so we can treat the image the same way a System.Drawing.Bitmap object is used.
Here is the Bitmap class:

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.IO;
namespace ImageApplications
{
public struct PixelColor
{
public byte Blue;
public byte Green;
public byte Red;
public byte Alpha;
public PixelColor(int r, int g, int b, int a)
{
this.Red = (byte)r;
this.Green = (byte)g;
this.Blue = (byte)b;
this.Alpha = (byte)a;
}
}
public class Bitmap
{
public BitmapSource Image;
private BitmapImage iSrc;
private byte []array;
private Int32Rect rect;
public int Height, Width;
public int NumCol, NumRow;
public Bitmap(string fileName)
{
    iSrc = new BitmapImage(new Uri(fileName));
    LoadPixels();
}
public Bitmap(BitmapImage bmpSrc)
{
    iSrc = bmpSrc;
    LoadPixels();
}
public Bitmap(int Height, int Width)
{
    WriteableBitmap wb = new WriteableBitmap(
        Width, Height, 96, 96, PixelFormats.Bgra32, null);
    iSrc = BitmapImageFromBitmapSource(wb);
    LoadPixels();
}
// Pull the pixel data into the working byte array so that
// GetPixel/SetPixel can address the image like a matrix.
private void LoadPixels()
{
    // normalise to Bgra32 so every pixel occupies exactly 4 bytes
    FormatConvertedBitmap src = new FormatConvertedBitmap(iSrc, PixelFormats.Bgra32, null, 0);
    Image = src;
    Width = src.PixelWidth;
    Height = src.PixelHeight;
    NumCol = Width;
    NumRow = Height;
    rect = new Int32Rect(0, 0, Width, Height);
    array = new byte[Width * Height * 4];
    src.CopyPixels(rect, array, Width * 4, 0);
}
private BitmapImage CropImage(ImageSource source, int width, int height, int startx, int starty)
{
    // shift the source so that the crop region lands at the origin,
    // then render only a width x height window of it
    var rect = new Rect(-startx, -starty, source.Width, source.Height);
    var group = new DrawingGroup();
    RenderOptions.SetBitmapScalingMode(group, BitmapScalingMode.HighQuality);
    group.Children.Add(new ImageDrawing(source, rect));
    var drawingVisual = new DrawingVisual();
    using (var drawingContext = drawingVisual.RenderOpen())
        drawingContext.DrawDrawing(group);
    var resizedImage = new RenderTargetBitmap(
        width, height,           // crop dimensions
        96, 96,                  // default DPI values
        PixelFormats.Default);   // default pixel format
    resizedImage.Render(drawingVisual);
    BitmapSource bf = BitmapFrame.Create(resizedImage);
    BitmapImage bi = BitmapImageFromBitmapSource(bf);
    return bi;
}
public void CropTheImage(int startx, int starty, int width, int height)
{
    iSrc = CropImage(Image, width, height, startx, starty);
    LoadPixels();
}
public void Resize(int rRows, int rCols)
{
    // ScaleTransform takes (scaleX, scaleY), i.e. the width factor first
    TransformedBitmap tb = new TransformedBitmap(
        Image, new ScaleTransform((double)rCols / iSrc.PixelWidth,
                                  (double)rRows / iSrc.PixelHeight));
    iSrc = BitmapImageFromBitmapSource(tb);
    LoadPixels();
}
#region Image Processing Stuff
public void Finalize()
{
Image=WriteableBitmap.Create(iSrc.PixelWidth, iSrc.PixelHeight,
96, 96, PixelFormats.Bgra32, null, array, iSrc.PixelWidth * 4);
}
private PixelColor GetPixelValue(int x, int y, byte[] rawpixel, int width, int height)
{
PixelColor pointpixel;
int offset = y * width * 4 + x * 4;
pointpixel.Blue = rawpixel[offset + 0];
pointpixel.Green = rawpixel[offset + 1];
pointpixel.Red = rawpixel[offset + 2];
pointpixel.Alpha = rawpixel[offset + 3];
return pointpixel;
}
public PixelColor GetPixel(int x,int y)
{
return(GetPixelValue( x, y, array, Width, Height));
}
public void SetPixel(int x,int y, PixelColor color)
{
array=PutPixel(array, Width, Height, color, x, y);
}
private byte[] PutPixel(byte[] rawimagepixel, int width, int height, PixelColor pixels, int x, int y)
{
int offset = y * width * 4 + x * 4;
rawimagepixel[offset + 0] = pixels.Blue;
rawimagepixel[offset + 1] = pixels.Green;
rawimagepixel[offset + 2] = pixels.Red;
rawimagepixel[offset + 3] = pixels.Alpha;
return rawimagepixel;
}
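The byte-offset arithmetic used by GetPixelValue and PutPixel can be sketched in a language-neutral way. This is an illustrative Python sketch of the same Bgra32 indexing, not part of the app's code:

```python
def get_pixel(raw, width, x, y):
    # Bgra32: 4 bytes per pixel, row-major, stored in B, G, R, A order
    off = (y * width + x) * 4
    return {"b": raw[off], "g": raw[off + 1], "r": raw[off + 2], "a": raw[off + 3]}

def put_pixel(raw, width, x, y, b, g, r, a):
    # write the four channel bytes of pixel (x, y) back into the flat buffer
    off = (y * width + x) * 4
    raw[off:off + 4] = bytes([b, g, r, a])
    return raw
```

Note that the offset depends only on the row stride (width * 4), which is why these methods never need the image height for indexing.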
public static BitmapImage BitmapImageFromBitmapSource(BitmapSource src)
{
BitmapSource bitmapSource = src;
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
MemoryStream memoryStream = new MemoryStream();
BitmapImage bImg = new BitmapImage();
encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
encoder.Save(memoryStream);
bImg.BeginInit();
bImg.StreamSource = new MemoryStream(memoryStream.ToArray());
bImg.EndInit();
memoryStream.Close();
return bImg;
}
public void Save(string filePath)
{
var image = Image;
using (var fileStream = new FileStream(filePath, FileMode.Create, FileAccess.Write))
{
BitmapEncoder encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(image));
encoder.Save(fileStream);
}
}
#endregion
}
}
We work with four color channels: R, G, B, A. Hence we first declare a structure called
PixelColor to represent the color pattern of every pixel.
Three constructors are provided with the Bitmap class: one that reads a file name and initializes a bitmap object, one that creates an empty object from only width and height parameters, and one that initializes a Bitmap object from an existing BitmapImage. We introduce two functions, GetPixel and SetPixel, so that pixels can be accessed the same way as with a System.Drawing.Bitmap object. These functions call the GetPixelValue and PutPixel methods, which access the pixels directly from the 'array' variable; the array is filled in the constructor of the Bitmap class with the pixels of the BitmapImage, using the CopyPixels method. A conversion utility from BitmapSource to BitmapImage is also added so that operation results can be returned as either BitmapImage or BitmapSource, as needed.
Class support: as it stands now, the class and all the other functions are optimized for the JPEG image type.
Size: the algorithms are tested with 6 MB, 4000x4000-pixel images.
We also provide a Save option for saving a Bitmap object as an image in your file system.
So now we have a basic image class named Bitmap, along with the PixelColor struct. Let us start with the image-manipulation stuff.
Preprocessing
Preprocessing is essentially any operation carried out before the major image processing.
1. Resize:
WPF does not provide a dedicated image-resize API, so we take advantage of ScaleTransform to resize images, and that too without compromising the aspect ratio. The support is provided in the Bitmap class itself, through the Resize method.
TransformedBitmap tb = new System.Windows.Media.Imaging.TransformedBitmap(
Image, new ScaleTransform((double)rRows/iSrc.Height, (double)rCols/iSrc.Width));
From the method that calls Resize, we specify the exact target size; it is converted into a relative scale and the image is resized through ScaleTransform.
See the resizing of an image to 64x32 below.
2. Gray Scale Conversion
Now this has nothing to do with the native Bitmap support; it is core image-processing work, so we put the method in the WpfImageProcessing class.
public static BitmapSource GrayConversion(BitmapImage srcImage)
{
Bitmap bmp = new Bitmap(srcImage);
for (int i = 0; i < bmp.NumRow; i++)
{
for (int j = 0; j < bmp.NumCol; j++)
{
PixelColor c = bmp.GetPixel(j, i);
// extract the red, green and blue components of the pixel's color
int rd = c.Red; int gr = c.Green; int bl = c.Blue;
double d1 = 0.2989 * (double)rd + 0.5870 * (double)gr + 0.1140 * (double)bl;
int c1 = (int)Math.Round(d1);
PixelColor c2 = new PixelColor(c1, c1, c1, c.Alpha);
bmp.SetPixel(j, i, c2);
}
}
bmp.Finalize();
return bmp.Image;
}
You can see we have used the well-known luminance formula to convert the RGB values to gray:
double d1 = 0.2989 * (double)rd + 0.5870 * (double)gr + 0.1140 * (double)bl;
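For clarity, the per-pixel conversion can be isolated as a tiny function. This is a Python sketch of the same BT.601 luma weights, for illustration only:

```python
def to_gray(r, g, b):
    # ITU-R BT.601 luma weights, as in the GrayConversion method above
    return round(0.2989 * r + 0.5870 * g + 0.1140 * b)
```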
Observe the Finalize call at the end of the method. It plays a role similar to a Lock/Unlock pair: rather than manipulating the bits of the source BitmapImage (iSrc) after every pixel, we rebuild the image only once, after the operation is completed. This results in a faster operation. As WPF does not support modifying a BitmapImage in place, we take the help of WriteableBitmap to create a new image from the array that was manipulated by the operation, and finally assign it to the Image member of the Bitmap class.
Image = WriteableBitmap.Create(iSrc.PixelWidth, iSrc.PixelHeight,
    96, 96, PixelFormats.Bgra32, null, array, iSrc.PixelWidth * 4);
Here is the result of Gray Conversion
3. Binary Conversion
A binary image is essentially an image in which every pixel is either '0' or '1'. But we want to maintain homogeneity in the way images are handled and viewed, hence we represent 1 with 255. So our binary image is also a three-channel image with the values 255 and 0: any gray-scale value > threshold becomes 255, and 0 otherwise.
public static BitmapSource Gray2Binary(BitmapImage grayImage,double threshold)
{
if (threshold > 1)
{
throw new ApplicationException("Threshold must be between 0 and 1");
}
if (threshold < 0)
{
threshold = System.Math.Abs(threshold);
}
threshold = 255 * threshold;
Bitmap bmp = new Bitmap(grayImage);
for (int i = 0; i < bmp.NumRow; i++)
{
for (int j = 0; j < bmp.NumCol; j++)
{
PixelColor c = bmp.GetPixel(j, i);
//Color c = bmp.GetPixel(j, i);// Extract the color of a pixel
int rd = c.Red;
double d1 = 0;
if (rd > threshold)
{
d1 = 255;
}
int c1 = (int)Math.Round(d1);
PixelColor c2 = new PixelColor(c1, c1, c1, c.Alpha);
bmp.SetPixel(j, i, c2);
}
}
bmp.Finalize();
return bmp.Image;
}
Here is the sample output of the operation
The important thing here is that the routine expects its input to be of gray type. Once you have the result of the gray conversion in Image1, the main image renderer, you must also update the source BitmapImage, iSrc, as:
iSrc = Bitmap.BitmapImageFromBitmapSource(Image1.Source as BitmapSource);
Dilation and erosion are two major operations needed on binary images. Dilation is the process of filling the pixels neighbouring a white pixel with white, and erosion is the reverse. To keep the article short enough, I am omitting the description of these simple methods.
Another color manipulation commonly used in image operations is color inversion. The logic is pretty simple: every channel is inverted with the (max - value) rule. So if the red component of a pixel is 10, the inverted value will be 255 - 10 = 245.
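Per channel, the inversion rule is just this (an illustrative Python sketch; alpha is left untouched):

```python
def invert(r, g, b, a):
    # (max - value) rule per color channel; alpha is preserved
    return (255 - r, 255 - g, 255 - b, a)
```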
Here is a sample output.
Transform
Transforms are mathematical models, or more precisely mappings of data from one domain to another, used to explore hidden behaviour of the data. We use different types of transforms in image processing, like the wavelet transform, the DCT and so on.
We have already implemented the most common transforms in this studio. For continuity of the discussion, I will show the way we have used the wavelet transform.
Wavelet Transform
A wavelet transform basically gives a multi-resolution image: when you subject any image to it, you get four sub-images, LL, LH, HL and HH. LL is the low-pass component, which retains a scaled-down approximation of the actual image; LH, HL and HH retain the horizontal, vertical and diagonal details. This is why the wavelet transform gives a great means of extracting edges and other features that are essential for pattern recognition.
So our wavelet transform takes two inputs: the image, and a function value which can be either 1 or 0. 0 returns the edges recovered from the wavelet-transformed image, and 1 returns all the scaled sub-images composed as a single image.
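The row and column passes in the code below both apply the same one-dimensional averaging/differencing step to pairs of samples. A minimal Python sketch of one such pass (integer halves, mirroring the C# casts) looks like this:

```python
def haar_step(samples):
    # one level: pairwise averages in the first half, pairwise differences in the second
    half = len(samples) // 2
    out = [0] * len(samples)
    for k in range(half):
        a, b = samples[2 * k], samples[2 * k + 1]
        out[k] = (a + b) // 2          # low-pass (approximation)
        out[k + half] = (a - b) // 2   # high-pass (detail)
    return out
```

Applying this to every row and then to every column of the result yields the LL, LH, HL and HH quadrants described above.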
public static BitmapSource WaveletTransform(BitmapImage bmpSource, int function,int m_threshold)
{
int[,] orgred;
int[,] orgblue;
int[,] orggreen;
int[,] rowred;
int[,] rowblue;
int[,] rowgreen;
int[,] colred;
int[,] colblue;
int[,] colgreen;
int[,] scalered;
int[,] scaleblue;
int[,] scalegreen;
int[,] recrowred;
int[,] recrowblue;
int[,] recrowgreen;
int[,] recorgred;
int[,] recorgblue;
int[,] recorggreen;
Bitmap bitmap = new Bitmap(bmpSource);
orgred = new int[bitmap.Height + 1, bitmap.Width + 1];
orgblue = new int[bitmap.Height + 1, bitmap.Width + 1];
orggreen = new int[bitmap.Height + 1, bitmap.Width + 1];
rowred = new int[bitmap.Height + 1, bitmap.Width + 1];
rowblue = new int[bitmap.Height + 1, bitmap.Width + 1];
rowgreen = new int[bitmap.Height + 1, bitmap.Width + 1];
colred = new int[bitmap.Height + 1, bitmap.Width + 1];
colblue = new int[bitmap.Height + 1, bitmap.Width + 1];
colgreen = new int[bitmap.Height + 1, bitmap.Width + 1];
scalered = new int[bitmap.Height + 1, bitmap.Width + 1];
scaleblue = new int[bitmap.Height + 1, bitmap.Width + 1];
scalegreen = new int[bitmap.Height + 1, bitmap.Width + 1];
recrowred = new int[bitmap.Height + 1, bitmap.Width + 1];
recrowblue = new int[bitmap.Height + 1, bitmap.Width + 1];
recrowgreen = new int[bitmap.Height + 1, bitmap.Width + 1];
recorgred = new int[bitmap.Height + 1, bitmap.Width + 1];
recorgblue = new int[bitmap.Height + 1, bitmap.Width + 1];
recorggreen = new int[bitmap.Height + 1, bitmap.Width + 1];
//unsafe
{
for (int i = 0; i < bitmap.Height; i++)
{
for (int j = 0; j < bitmap.Width; j++)
{
orgred[i, j] = bitmap.GetPixel(j,i).Red;
orggreen[i, j] = bitmap.GetPixel(j, i).Green;
orgblue[i, j] = bitmap.GetPixel(j, i).Blue;
}
}
}
//Transform rows
for (int r = 0; r < bitmap.Height; r++)
{
int k = 0;
for (int p = 0; p < bitmap.Width; p = p + 2)
{
rowred[r, k] = (int)((double)(orgred[r, p] + orgred[r, p + 1]) / 2);
rowred[r, k + (bitmap.Width / 2)] = (int)((double)(orgred[r, p] - orgred[r, p + 1]) / 2);
rowgreen[r, k] = (int)((double)(orggreen[r, p] + orggreen[r, p + 1]) / 2);
rowgreen[r, k + (bitmap.Width / 2)] = (int)((double)(orggreen[r, p] - orggreen[r, p + 1]) / 2);
rowblue[r, k] = (int)((double)(orgblue[r, p] + orgblue[r, p + 1]) / 2);
rowblue[r, k + (bitmap.Width / 2)] = (int)((double)(orgblue[r, p] - orgblue[r, p + 1]) / 2);
k++;
}
}
//Transform columns
for (int c = 0; c < bitmap.Width; c++)
{
int k = 0;
for (int p = 0; p < bitmap.Height; p = p + 2)
{
colred[k, c] = (int)((double)(rowred[p, c] + rowred[p + 1, c]) / 2);
colred[k + bitmap.Height / 2, c] = (int)((double)(rowred[p, c] - rowred[p + 1, c]) / 2);
colgreen[k, c] = (int)((double)(rowgreen[p, c] + rowgreen[p + 1, c]) / 2);
colgreen[k + bitmap.Height / 2, c] = (int)((double)(rowgreen[p, c] - rowgreen[p + 1, c]) / 2);
colblue[k, c] = (int)((double)(rowblue[p, c] + rowblue[p + 1, c]) / 2);
colblue[k + bitmap.Height / 2, c] = (int)((double)(rowblue[p, c] - rowblue[p + 1, c]) / 2);
k++;
}
}
//Scale col
for (int r = 0; r < bitmap.Height; r++)
{
for (int c = 0; c < bitmap.Width; c++)
{
if (r >= 0 && r < bitmap.Height / 2 && c >= 0 && c < bitmap.Width / 2)
{
scalered[r, c] = colred[r, c];
scalegreen[r, c] = colgreen[r, c];
scaleblue[r, c] = colblue[r, c];
}
else
{
scalered[r, c] = Math.Abs((colred[r, c] - 127));
scalegreen[r, c] = Math.Abs((colgreen[r, c] - 127));
scaleblue[r, c] = Math.Abs((colblue[r, c] - 127));
}
}
}
//Set LL = 0
for (int r = 0; r < bitmap.Height / 2; r++)
{
for (int c = 0; c < bitmap.Width / 2; c++)
{
colred[r, c] = 0;
colgreen[r, c] = 0;
colblue[r, c] = 0;
}
}
//Threshold the detail coefficients
for (int r = 0; r < bitmap.Height; r++)
{
for (int c = 0; c < bitmap.Width; c++)
{
if (!(r >= 0 && r < bitmap.Height / 2 && c >= 0 && c < bitmap.Width / 2))
{
if (Math.Abs(colred[r, c]) <= m_threshold)
{
colred[r, c] = 0;
}
else
{
//colred[r, c] = 255;
}
if (Math.Abs(colgreen[r, c]) <= m_threshold)
{
colgreen[r, c] = 0;
}
else
{
//colgreen[r, c] = 255;
}
if (Math.Abs(colblue[r, c]) <= m_threshold)
{
colblue[r, c] = 0;
}
else
{
//colblue[r, c] = 255;
}
}
}
}
//Inverse Transform columns
for (int c = 0; c < bitmap.Width; c++)
{
int k = 0;
for (int p = 0; p < bitmap.Height; p = p + 2)
{
recrowred[p, c] = (int)((colred[k, c] + colred[k + bitmap.Height / 2, c]));
recrowred[p + 1, c] = (int)((colred[k, c] - colred[k + bitmap.Height / 2, c]));
recrowgreen[p, c] = (int)((colgreen[k, c] + colgreen[k + bitmap.Height / 2, c]));
recrowgreen[p + 1, c] = (int)((colgreen[k, c] - colgreen[k + bitmap.Height / 2, c]));
recrowblue[p, c] = (int)((colblue[k, c] + colblue[k + bitmap.Height / 2, c]));
recrowblue[p + 1, c] = (int)((colblue[k, c] - colblue[k + bitmap.Height / 2, c]));
k++;
}
}
//Inverse Transform rows
for (int r = 0; r < bitmap.Height; r++)
{
int k = 0;
for (int p = 0; p < bitmap.Width; p = p + 2)
{
recorgred[r, p] = (int)((recrowred[r, k] + recrowred[r, k + (bitmap.Width / 2)]));
recorgred[r, p + 1] = (int)((recrowred[r, k] - recrowred[r, k + (bitmap.Width / 2)]));
recorggreen[r, p] = (int)((recrowgreen[r, k] + recrowgreen[r, k + (bitmap.Width / 2)]));
recorggreen[r, p + 1] = (int)((recrowgreen[r, k] - recrowgreen[r, k + (bitmap.Width / 2)]));
recorgblue[r, p] = (int)((recrowblue[r, k] + recrowblue[r, k + (bitmap.Width / 2)]));
recorgblue[r, p + 1] = (int)((recrowblue[r, k] - recrowblue[r, k + (bitmap.Width / 2)]));
k++;
}
}
Bitmap imgPtr = new Bitmap(bmpSource);
// unsafe
{
// byte[] imgPtr = new byte[bitmap.Height*bitmap.Width*4];
int k = 0;
for (int i = 0; i < bitmap.Height; i++)
{
for (int j = 0; j < bitmap.Width; j++)
{
if (function == 0)
{
PixelColor pc=new PixelColor();
pc.Red =(byte) Math.Abs(recorgred[i, j] - 0);
pc.Green = (byte)Math.Abs(recorggreen[i, j] - 0);
pc.Blue = (byte)Math.Abs(recorgblue[i, j] - 0);
imgPtr.SetPixel(j, i, pc);
}
else
{
PixelColor pc = new PixelColor();
pc.Red = (byte)scalered[i, j];
pc.Green = (byte)scalegreen[i, j];
pc.Blue= (byte)scaleblue[i, j];
imgPtr.SetPixel(j, i, pc);
}
}
}
}
imgPtr.Finalize();
return imgPtr.Image;
}
Here is the result of the multi-resolution image.
Above is the result of wavelet edge detection.
Filtering
Filtering is one of the most essential operations on any image. It is used to remove noise from the image or to increase visibility or sharpness. The filtering process is defined as the convolution of a kernel with an image block, so you basically define a kernel, or filter core, as an m×n matrix.
Example:
Filtering multiplies the kernel values with the associated pixels, takes the sum, and replaces the central pixel with it. One of the prominent filters is the median filter. Let us see how the median filter works in our app.
public static BitmapSource MedianFilter(BitmapImage srcBmp, int []Kernel)
{
List<int> rd = new List<int>();
List<int> gr = new List<int>();
List<int> bl = new List<int>();
List<int> alp = new List<int>();
int xdir = Kernel[0];
int ydir = Kernel[1];
Bitmap bmp = new Bitmap(srcBmp);
Bitmap median = new Bitmap(srcBmp);
PixelColor c;
for (int i = 0; i < bmp.NumRow; i++)
{
for (int j = 0; j < bmp.NumCol; j++)
{
rd = new List<int>();
gr = new List<int>();
bl = new List<int>();
alp = new List<int>();
int ind = 0;
PixelColor pc=new PixelColor();
for (int i1 = i-ydir; i1 <= i+ydir; i1++)
{
for (int j1 = j-xdir; j1 <= j+xdir; j1++)
{
if((j1<bmp.NumCol) && (i1<bmp.NumRow) &&
(j1>=0)&&(i1>=0)&&(i1!=i)&&(j1!=j))
{
pc = median.GetPixel(j1, i1);
rd.Add(pc.Red);
gr.Add(pc.Green);
bl.Add(pc.Blue);
alp.Add(pc.Alpha);
ind++;
}
}
}
if (rd.Count > 0)
{
int red = (int)GetMedian(rd.ToArray());
int green = (int)GetMedian(gr.ToArray());
int blue = (int)GetMedian(bl.ToArray());
int alpha = (int)GetMedian(alp.ToArray());
PixelColor pc2 = new PixelColor(red, green, blue, alpha);
median.SetPixel(j, i, pc2);
}
}
}
median.Finalize();
return median.Image;
}
It basically loops through the image, takes a pixel, extracts all the pixels around it, finds their median, and replaces the central pixel with the median of its neighbours.
Although the kernel size is predefined, pixels at the edges will not have as many neighbours as desired, so we use a List rather than a fixed array to collect the neighbours.
The GetMedian method returns the median of a list.
The median is given by the center of a sorted array; if the array length is even, it is calculated as the mean of the two center values.
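GetMedian itself is not shown in the article; a minimal sketch consistent with the description above (Python, for illustration) is:

```python
def get_median(values):
    # median = center of the sorted values; mean of the two centers for an even count
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2
```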
Here is the result of Median Filtering
Talking about filtering, what would we do without a proper convolution filter that allows the user to perform kernel-based processing? One of the simplest forms is the 3x3 filter depicted above. Let us explore a 3x3 filter for edge detection: we perform convolution with a {-5 0 0, 0 0 0, 0 0 5} kernel to detect diagonal edges.
Here is the code for convolution:
public static BitmapSource ConvFilter(BitmapImage b, int [,]Kernel)
{
Bitmap bmp = new Bitmap(b);
Bitmap rslt = new Bitmap(b);
int nRows = Kernel.GetUpperBound(0);
int nCols = Kernel.GetUpperBound(1);
double sumR = 0, sumG = 0, sumB = 0,sumAlpha=0;
int weight = 0;
int r = 0, c = 0;
int tot = 0;
for (int i = nRows/2; i < bmp.NumRow-nRows/2; i++)
{
for (int j = nCols/2; j < bmp.NumCol-nCols/2; j++)
{
sumR = sumG = sumB = sumAlpha=0;
r = 0;
c = 0;
PixelColor pc = bmp.GetPixel(j, i);
for (int i1 = -nRows/2; i1 <=nRows/2 ; i1++)
{
for (int j1 = -nCols/2; j1 <=nCols/2; j1++)
{
pc = bmp.GetPixel(j1+j, i1+i);
double red = (double)pc.Red;
sumR = sumR+red * (double)Kernel[r, c];
double green = (double)pc.Green;
double blue = pc.Blue;
sumG = sumG + green * (double)Kernel[r, c];
sumB = sumB + blue* (double)Kernel[r, c];
tot++;
c++;
}
r++;
c = 0;
}
// for this edge-detection demo, only the red channel of the result is kept
sumG = 0;
sumB = 0;
if (sumR > 255)
{
sumR = 255;
}
if (sumR < 0)
{
sumR = 0;
}
if (sumG > 255)
{
sumG = 255;
}
if (sumG < 0)
{
sumG = 0;
}
if (sumB < 0)
{
sumB = 0;
}
if (sumB > 255)
{
sumB = 255;
}
pc.Red = (byte)(int)sumR;
pc.Green = (byte)sumG;
pc.Blue = (byte)sumB;
rslt.SetPixel(j, i, pc);
}
}
rslt.Finalize();
return rslt.Image;
}
One of the problems with a convolution filter is that the multiplication needs to be performed in the float, or better the double, domain, whereas the pixel values are bytes. So the system needs multiple typecasts; we are looking for a possible fix for this problem.
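The accumulate-in-floating-point-then-clamp pattern at the heart of ConvFilter can be sketched for a single output pixel (an illustrative Python sketch, not the app's code):

```python
def convolve_pixel(neighborhood, kernel):
    # accumulate kernel * pixel in floating point, then clamp to the byte range
    acc = 0.0
    for pixel_row, kernel_row in zip(neighborhood, kernel):
        for p, k in zip(pixel_row, kernel_row):
            acc += float(p) * float(k)
    return int(min(255, max(0, acc)))
```

With the diagonal kernel {-5 0 0, 0 0 0, 0 0 5}, a flat region cancels to 0 while a diagonal intensity jump saturates toward 255.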
Here is the result of Edge detection with 3x3 mentioned diagonal mask.
Segmentation
Segmentation is the process of clubbing colors together and normalizing them to the nearest color values, so that the overall number of colors in the image is reduced. This helps in object detection in images. There are several color-segmentation techniques, but this app will support Mean-Shift segmentation and face segmentation out of the box.
Mean-Shift Segmentation
The fundamental idea of this technique is very simple: first collect the NxN neighbouring pixels of a pixel (x, y), calculate their mean, and check whether the center is sufficiently close to the mean. If so, replace the center pixel with the mean value. But this approach has certain limitations: with new high-definition cameras there are huge color depths, so the variation from one color to another is huge. Hence I modified the algorithm in the following way:
1) Calculate the HSV components from the RGB components.
2) Run the mean-shift algorithm on the HSV components.
3) If the center pixel is to be replaced, replace it with the mean of the RGB values in the RGB image and the mean of the HSV values in the HSV image, so the HSV image is clustered along with the RGB image. Once the process is complete, return the RGB image.
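The HSV-guided replacement decision in the three steps above can be sketched for a single window (an illustrative Python sketch; the real code works on two Bitmap objects):

```python
def mean_shift_pixel(window_hsv, window_rgb, center_hsv, threshold):
    # the mean of the HSV window decides; the mean of the RGB window is the replacement
    n = len(window_hsv)
    mean_hsv = [sum(p[i] for p in window_hsv) / n for i in range(3)]
    dist = sum(abs(center_hsv[i] - mean_hsv[i]) for i in range(3)) / 3
    if dist < threshold:
        return tuple(round(sum(p[i] for p in window_rgb) / n) for i in range(3))
    return None  # center too far from the window mean: keep the original pixel
```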
Here is the code for mean shift algorithm:
public static BitmapSource MeanShiftSegmentation(BitmapImage src, double ThresholdDistance,int radious)
{
Bitmap rgb = new Bitmap(src);
Bitmap hsv = new Bitmap(Bitmap.BitmapImageFromBitmapSource(ConvertImageFromRGB2HSV(src)));
for (int i = 0; i < rgb.NumRow; i++)
{
for (int j = 0; j < rgb.NumCol; j++)
{
double tot=0;
double valsH = 0;
double valsS = 0;
double valsB = 0;
double valsR = 0;
double valsG = 0;
double valsBl = 0;
for (int i1 = -radious; i1 < radious; i1++)
{
for (int j1 = -radious; j1 < radious; j1++)
{
if (((i1 + i) >= 0) && ((i1 + i) < rgb.NumRow) &&
((j1 + j) >= 0) && ((j1 + j) < rgb.NumCol))
{
PixelColor pcHSV = hsv.GetPixel(j1 + j, i1 + i);
valsH+=(double)(pcHSV.Red);
valsS += (double)(pcHSV.Green);
valsB+=(double)(pcHSV.Blue);
PixelColor pcRGB = rgb.GetPixel(j1 + j, i1 + i);
valsR += (double)(pcRGB.Red);
valsG += (double)(pcRGB.Green);
valsBl += (double)(pcRGB.Blue);
tot++;
}
}
}
double mH = valsH/tot;
double mS =valsS/tot;
double mV = valsB/tot;
byte mR =(byte) (valsR/tot);
byte mG = (byte)(valsG/tot);
byte mB = (byte)(valsBl/tot);
PixelColor pcv = hsv.GetPixel( j, i);
PixelColor pcR = new PixelColor();
double avgColor = (Math.Abs(pcv.Red - mH) +
Math.Abs(pcv.Green - mS) + Math.Abs(pcv.Blue - mV)) / 3;
if (avgColor < ThresholdDistance)
{
pcR = new PixelColor(mR, mG, mB, 255);
rgb.SetPixel(j, i, pcR);
}
}
}
rgb.Finalize();
return rgb.Image;
}
Here is the result of segmentation. You can see that the prominent parts are marked with a nearest color and the variations are minimal, while the edges are retained. By suitably changing the distance and radius parameters, you can change the behavior of the segmentation process.
Here is the result of face detection, which is based on color thresholding in the YCbCr color scale followed by morphological processing.
Fun
What is an Ultrabook app without a fun element? People use apps to add to their entertainment, and there are a lot of such things to do with images.
We have already integrated three applications out of the box.
Inpainting is a method by means of which you can remove undesired objects from images. Say you have a nice picture with your former girlfriend, and you are looking damn smart in it. But unfortunately you are getting married to someone else, and you don't want to delete the photo. So? Use inpainting and remove your ex from the image. Or you have a beautiful snap with some irritating electric wires hanging across it: use inpainting to remove them.
Read my article on inpainting for more details: <<Here>>
Face Collage
Well, you have an album of many friends and you want to create a single photo by tiling all the photos such that the tiled result also reads as a picture.
Here is a collage of my 18-month-old son.
Here goes the code
public static BitmapSource MakeCollage(BitmapImage srcBmp,Bitmap[]allBmp,int BlockSizeRow,int BlockSizeCol)
{
Bitmap Rslt = new Bitmap(srcBmp); // Rslt is the resultant image
Bitmap src = new Bitmap(srcBmp);
int NumRow = src.Height;
int numCol = src.Width;
Bitmap srcBlock = new Bitmap(BlockSizeRow, BlockSizeCol);
for (int i = 0; i < NumRow - BlockSizeRow; i += BlockSizeRow)
{
for (int j = 0; j < numCol - BlockSizeCol; j += BlockSizeCol)
{
srcBlock = new Bitmap(BlockSizeRow, BlockSizeCol);
////1. Extract all the pixels in main image block
for (int i1 = 0; i1 < (BlockSizeRow); i1++)
{
for (int j1 = 0; j1 < (BlockSizeCol); j1++)
{
srcBlock.SetPixel(j1, i1, Rslt.GetPixel(j + j1, i + i1));
// System.Threading.Thread.Sleep(15);
}
}
srcBlock.Finalize();
////////// 2. Let us now compare this block with every image in database
double dst = double.MaxValue;
int small = -1;
for (int k = 0; k < allBmp.Length; k++)
{
double d = 0;
for (int i1 = 0; i1 < BlockSizeRow; i1++)
{
for (int j1 = 0; j1 < BlockSizeCol; j1++)
{
PixelColor c1 = srcBlock.GetPixel(j1, i1);
int rd1 = c1.Red;
int gr1 = c1.Green;
int bl = c1.Blue;
PixelColor c2 = allBmp[k].GetPixel(j1, i1);
int rd2 = c2.Red;
int gr2 = c2.Green;
int bl2 = c2.Blue;
d = d + Math.Abs(rd1 - rd2) + Math.Abs(gr1 - gr2) + Math.Abs(bl - bl2);
}
}
d = d / (double)(BlockSizeRow * BlockSizeCol);
if (d < dst)
{
dst = d;
small = k;
}
}
for (int i1 = 0; i1 < BlockSizeRow; i1++)
{
for (int j1 = 0; j1 < BlockSizeCol; j1++)
{
PixelColor c1 = allBmp[small].GetPixel(j1, i1);
try
{
Rslt.SetPixel(j1 + j, i1 + i, c1);
}
catch (Exception ex)
{
}
}
}
}
}
Rslt.Finalize();
return Rslt.Image;
}
More description of the collage technique shown here can be found <here>.
Anaglyph
Well, anaglyphs are basically red-cyan channel images which, when viewed through 3D glasses with one red and one cyan lens, give a 3D visualization. This is the primary interface provided for the 3D vision planned for a future version of the app.
Here is an Anaglyph generated from a Pair of Stereoscopic images.
Here are the source stereo images
And Here is the 3d View that we would get
This image will be generated by mapping the depth map of the anaglyph with the left-view image. The depth map is the visual disparity associated with the stereoscopic view.
Steganography
To add to the fun part, we are also ready with a unique image-steganography process that allows you to hide an image behind another image of the same scale. Yes, you got it right: the source and the payload are of the same size, which eliminates the need for the extra pixels that are a typical characteristic of steganography.
Here is the Algorithm:
Encoding process:
Decoding process:
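The exact encoding and decoding steps are shown only as figures here. As one common way to realize same-size embedding (an assumption for illustration, not necessarily the app's exact scheme), each cover byte can keep its high nibble while the payload's high nibble is stored in the low nibble:

```python
def embed(cover, payload):
    # keep the cover's high 4 bits; hide the payload's high 4 bits in the low 4 bits
    return bytes((c & 0xF0) | (p >> 4) for c, p in zip(cover, payload))

def extract(stego):
    # recover a 4-bit-per-channel approximation of the payload
    return bytes((s & 0x0F) << 4 for s in stego)
```

Both images lose their low nibbles, but no extra pixels are needed, which matches the "same size" property described above.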
Utilities
These are small applications that come bundled with the main application and are often needed for business and other educational purposes.
The utilities that are now almost ready to be shipped with the application are:
Content Based Image Search and Retrieval
The first part, content search through a template, is ready; find the explanation below. For machine learning and content-based image search, you will need to come back for a later update.
Search a template within an Image
Template matching is a technique where you select one (or more) small image, called the template t, and search for it over a large image, called f. Template matching works on the principle of cross-correlation, i.e. the closeness of the values within a block of f (of the size of the template t) to the template itself.
In the simplest terms, the process can be explained by the following formula:
Software like OpenCV provides a good template-matching technique. However, with the current framework my primary goal was to eliminate any third-party dependency, GDI support and even pointers, so that the library could be rewritten in Java with ease. So here is the code for the correlation that I used.
public static Int32Rect TemplateMatch(BitmapImage mainImage, BitmapImage templateImage)
{
Bitmap f=new Bitmap(mainImage);
Bitmap t=new Bitmap(templateImage);
int BlockSizeRow = t.NumRow;
int BlockSizeCol = t.NumCol;
int NumRow = f.NumRow;
int numCol = f.NumCol;
double dst = 0;
int smallX = 0,smallY=0;
double miuF = CalculateMean(f.GetImageAsList());
double stdF = CalculateStdDev(f.GetImageAsList());
double miuT = CalculateMean(t.GetImageAsList());
double stdT = CalculateStdDev(t.GetImageAsList());
for (int i = 0; i < NumRow - BlockSizeRow; i ++)
{
for (int j = 0; j < numCol - BlockSizeCol; j ++)
{
double sm = 0;
for (int i1 = 0; i1 < (BlockSizeRow); i1++)
{
for (int j1 = 0; j1 < (BlockSizeCol); j1++)
{
PixelColor fpc = f.GetPixel(j1 + j, i1 + i);
PixelColor tpc = t.GetPixel(j1, i1);
sm =sm+ ((double)fpc.Red - miuF) * ((double)tpc.Red - miuT) / (stdF * stdT);
}
}
sm=sm/(double)(BlockSizeCol*BlockSizeRow);
sm = Math.Abs(sm);
if (sm > dst)
{
dst = sm;
smallX = i;
smallY = j;
}
}
}
// smallX holds the best-match row (y) and smallY the column (x)
return new Int32Rect(smallY, smallX, (int)templateImage.Width, (int)templateImage.Height);
}
In simple words, the above method implements the formula I mentioned: it finds the block of f that best matches the template and returns a rectangle of the template's size, starting at the coordinates where the best match appears.
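The score computed inside the double loop is a normalized cross-correlation. For one block it can be sketched as follows (an illustrative Python sketch over flattened gray values; note this sketch uses per-block statistics, while the C# code approximates them with global image statistics):

```python
def ncc_score(f_block, t_block):
    # normalized cross-correlation of an image block with the template
    n = len(f_block)
    mu_f = sum(f_block) / n
    mu_t = sum(t_block) / n
    std_f = (sum((v - mu_f) ** 2 for v in f_block) / n) ** 0.5
    std_t = (sum((v - mu_t) ** 2 for v in t_block) / n) ** 0.5
    if std_f == 0 or std_t == 0:
        return 0.0  # flat block: correlation undefined, treat as no match
    num = sum((a - mu_f) * (b - mu_t) for a, b in zip(f_block, t_block))
    return num / (n * std_f * std_t)
```

A perfect match scores 1.0 and a perfectly inverted block scores -1.0, which is why the C# code keeps the largest absolute score.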
But wait: before you conclude that it is easy, I would like to emphasize a few facts. As the template matching is performed in gray scale and the method is spatial, the chance of a mismatch is quite high. So all you have to do is convert the spatial template matching into a spectral template-matching method, by taking a DCT or wavelet transform of both the template and f and then subjecting them to the template-matching technique. I have already covered the wavelet technique; DCT will be covered with the next update.
Here is the result of the template-matching process, with a screenshot of the simple interface I am using to test the algorithms.
I am using a small template, seen at the bottom-left corner, to search one of the images in my gallery. For overlaying the rectangle on the image, I am using a Canvas. The main image source is assigned as the Source of an ImageBrush; a rectangle filled with that brush is added as a child of the canvas, and another rectangle is used to draw the resulting rectangle from the TemplateMatch method and is added as another child of the canvas.
Here is the overlaying code:
Rectangle exampleRectangle = new Rectangle();
exampleRectangle.Width = 128;
exampleRectangle.Height = 128;
// Paint the rectangle with the main image and position it over the match
ImageBrush myBrush = new ImageBrush();
myBrush.ImageSource = Image1.Source;
exampleRectangle.Fill = myBrush;
Canvas.SetLeft(exampleRectangle, r.X);
Canvas.SetTop(exampleRectangle, r.Y);
canvas.Children.Insert(0, exampleRectangle);
Communication
When I showed the app to my wife and asked her to evaluate it, she bluntly said: "If I were to choose your app, I would not. Yes, it gives many processing, image-searching and modification capabilities, but I cannot share anything! And that sucks big time!"
I was stunned, and thought: yeah, right. Anything cool we do, we want to show, we want to share, and we want to do it from our apps. So I decided to add a communication framework alongside the image-processing framework for easy sharing of photos.
The framework gives two fetching and sharing options out of the box: Facebook and Bluetooth.
Firstly, we can download images straight from our own and our friends' Facebook albums, and we can publish the photo in the Image1 main image pane directly to Facebook.
A. Facebook Integration
1. Obtaining an API key
For any Facebook application, the first thing you need is an app approved by Facebook (it cannot contain the word "Face" in the app name). So open, click on Apps and then on Create App. Give it a name and obtain an API key; observe the following screenshots, where the green-marked fields are the important ones.
2. Configuring your app with the API details
Open the FacebookingTest solution in Visual Studio .NET 2010. At the application level, the first thing needed is that the app be authenticated by Facebook. Assuming that you are distributing the app, whoever uses it must accept that their Facebook data (called the Graph) is allowed to be fetched or accessed by the application, so while connecting to Facebook the application must declare that it is a valid one. After opening the project, you will see App.config in your Solution Explorer; open the file.
Save the code. Now, the first step the app must take is to call the Facebook authorization service, passing your app id as the client_id, which requests Facebook to authenticate the app. If the authentication is successful, you will be redirected to with a session token that confirms the successful login.
Then you need to create a Picture fetching method.
When you try to fetch details like a friend list or your friends' status updates, you work with the 'Get' type of API, and when you want to post something, you work with the 'Post' API. This page gives you the details of all the APIs:. gives you the details of the parameters of the Post API. me/feed gives your own feed; you need to look into the Post API fields to understand what all you can pass.
Observer that the result is stored in dynamic data object. So It is able to consume different format of data. Result of any facebook API accessing method will be a XML stream. dynamic data type can parse the stream and Enumerate the data objects easily.
Will send you all pictures from yours and your's friend list.
B. Bluetooth Integration
We use InTheHand namespace of 32Feet bluetooth library to develop a mechanism for searching and sending the image. A BluetoothDeviceSelectionDialog will first enable the users to get the bluetooth devices in the proximity followed by selecting a device to transfer the image in Image1.
We will first save the image of Image1 by using the Save utility of our Bitmap class followed by creating a ObexWebRequest with Uri that includes the filename along with the selected device address. We request for a read with post method which writes the image in the selected bluetooth device. Once successful, we delete the local image.
Here is the sample code:
private void sendfile()
{
SelectBluetoothDeviceDialog dialog = new SelectBluetoothDeviceDialog();
dialog.ShowAuthenticated = true;
dialog.ShowRemembered = true;
dialog.ShowUnknown = true;
dialog.ShowDialog();
BitmapImage iSrc = Bitmap.BitmapImageFromBitmapSource(Image1.Source as BitmapSource);
Bitmap bmp = new Bitmap(iSrc);
bmp.Save("my.jpg");
System.Uri uri = new Uri("obex://" +
dialog.SelectedDevice.DeviceAddress.ToString() + "/" + "my.jpg");
ObexWebRequest request = new ObexWebRequest(uri);
request.ReadFile("my.jpg");
ObexWebResponse response = (ObexWebResponse)request.GetResponse();
MessageBox.Show(response.StatusCode.ToString());
response.Close();
}
The fundamental of Gallery Management is very simple. User can have multiple Galleries (like Albums in Facebook). Main Config file stores the details of the Gallery.
<Gallery>
<Name> My Wedding</Name>
<ConfigLocation>MyWedding.xml</ConfigLocation>
<Message> Photos of My Wedding</Message>
<Date>dd.mm.yyyy</Date>
<GeoLocation>
<Latitude>LT</Latitude>
<Longitude>LN</Longitude>
</GeoLocation>
</Gallery>
Each gallery is further stored as node in this config file with same structure. User may choose to use the attributes or they might be left as blank. As seen from the configuration file, it stores the location of independent galleries. Structure of Independent gallery is as follows
<Picture>
<location>c:\Photos\wed1.jpg</location>
<NumFaces>1</NumFaces>
<Face>
<Name>Rupam</Name>
<Rectangle> 20 30 80 110</Rectangle>
<FacePca>1.7 3.49 6.66 2.23 8.41 9.92 9.99 17.6</FacePca>
</Face>
<StandardFeatures>
<ColorFeatures>MR MG MB SR SG SB</ColorFeatures>
<TextureFeatures>MH MS MV SH SS SV<TextureFeatures>
<ShapeFeatures>
<Zernike>
ZM1_M ZM2_M ZM3_M ZM4_M
</Zernike>
<ShapeFeatures>
</StandardFeatures>
</Picture>
It can be seen that every picture is treated as an Element in the Gallery. Every Picture can have multiple faces detected with Skin segmentation and Connected Component as explained above. We extract 8 dominant PCA components from Eigen faces. When you browse an image, the App tries to locate the faces automatically and then marks them. PCA of the rectangle is extracted and is normalized. User can also select the face using Mouse or stylus device. As the database grows, significant information about the faces are available. When new photos are loaded, template matching with PCA is adopted to search the faces and the system becomes really accurate in terms of locating the faces.
With facebook imaging system, you need to manually Tag the faces. But with this APP, it is automatically done. Faces are auto tagged after detection using PCA based face matching. But as you expect, the system misdetection rate is very high at the beginning due to low number of training faces. So we leave a provision for the user to re-tag the faces with correct name. But as your gallery expands, recognition accuracy also increases.
Now Once Faces are extracted, rest of the image is treated as background. We extract Global features from the image as Mean of Red, Green, Blue color components (MR,MG,MB), Mean of HSV Texture Components(MH,MS,MV), Standard Deviation of Color Components ( SR,SG,SB), Standard Deviation of Texture Components ( SH,SS,SV). We Also incorporate Global shape information for proper matching of flowers, animals, foods etc.
Shape features are extracted from Zernike moments. A Zernike moment is moment extracted from polar Harmonic of the images. Initially we restrict ourselves to 4 order Zernike moments. Other shape features that I am trying to integrate are: Polar Histogram and Shape Context.
Once you load an image and search, the search will be performed among all your galleries and a temporary xml will be created with new files that matches the search. User can save the result of search as another Gallery.
Here is a sample screenshot of the tested algorithm.
Left is the Query image, right is the matched image. Lower listbox gives all the matches in order of closeness.
User Mode
1) Use can manage multiple Galleries
2) Faces are Automatically extracted and Tagged
3) Pictures can be Imported through Bluetooth and Facebook
4) Pictures can be published to Facebook or shared through Bluetooth
5) Pictures can be searched based on Geo Location, Features, Faces. Search input can be a simple tag or it can be an Image itself. Result of Search can also be saved as a Gallery
6) User can Create a Face Collage of a Face from all the pictures in a Gallery
7) User can remove unwanted objects from Images using Inpainting Technique
8) Users can hide an Image behind another Image
9) User can Hide text behind the Images
10) User can create Anaglyph image from Steroscopic Pair of Images
11) Depth map extraction and 3-d Rendering of the images
12) GPS based automatic rendering of inter Gallery Pictures
13) Extract Still images from webcam and load into gallery
14) Ambient Light based Brightness adjustment
15) Stylus and Touch based contour Selection
16) Cool Presentation mode with slideshow and slideshow effects.
Developer Mode
1) Use the framework and Simple Algo-Language to develop own image processing routine
3) Easy design of Filters
4) Import the techniques and apply on video
5) Develop face Biometric Solutions
6) Create Custom CBIR System for any contents
In short you can manage Images as contents and explore the great power of image processing.
How it is different from OpenCV or Matlab?
Matlab and OpenCV needs coding to implement techniques and are mainly for developing algorithms. No other software ( At least not in my knowledge) gives the option of having fun with images, at the same time building your own techniques. So if you do not have ImageGrassy, you probably have never know how strong image processing can be!
Final Integration testing is underway for getting it approved to AppStore. Having cleared the first round, the next target is to offer the users with an exclusive experience and fun with their Images. So I am introducing another section, "Product Tour" where you will see the initial screens and functioning of the entire system. Your Suggestions are valuable. Dont forget leaving your comments!
Disclaimer: None of the interfaces or designs are final yet. I am keeping Product Tour Section updated to let you know the current development and also to get valuable suggestions from you!
Why Ultrabook?
You know it! When people visit an ultrabook App store they discover that Apps here are productive tools like any other Microsoft product and not just stripped and deformed version of one of those thousands of Apple Apps. Second the design, the hybrid programming model and cool applications needs a cool platform and in windows 7, it would look bit like a clumsy editor where somebody has clubbed thousands of routines.
Where is the Code?
I have no planning to make this as another freeware that I made and distributed like many others. It is kind of pure Wpf model that abstracts the wpf powers and gives your conventional programming skills a good wing. As the stuff is planned and written for commercial apps, its difficult to release the code. But wait, all the snippets are functional and you can build your own App or use them in your project. All of them work!
Why not other sensors?
Too many cooks spoils the party. So I would limit the App's sensor usage to touch, Stylus and Light Sensing enabled. For God sake it is more of a laptop and you are not going to twist it in your hand(I take my words back " /> ).
Any other cool application that you think is possible with the framework are also welcome. Dont forget to drop a comment about features you think absolutely necessary with the App.
Which Category?
Firstly I thought of Metro, but when I started testing it on a monotonous color Metro Interface, it looked silly. Because the App provides many image operations and manipulations, there are several colors and variations which ultimately takes away the metro look and feel. So I would rather go for desktop app with Education/Entertainment category.
Following Links, concepts and code snippets have been quite helpful and I am thankful to writers of all the following Articles/Snippets
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
1. Article without CBIS and Template matching is released on 10.10.2012
2. Convolution and Edge detection with diagonal filter explained 11.10.2012
3. Template Matching with Normalized cross correlation Explained 11.10.2012
4. Sensors, Webcam, CommunicationFramework integrated and elaborated on 13.10.2012
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Grasshopper.iics wrote:So rotating, turning and twisting it in hands is out of the question
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/473725/ImageGrassy-An-Image-Processing-and-Utility-Tool | CC-MAIN-2015-48 | refinedweb | 8,674 | 58.79 |
The compiler decides whether you can call a method based on the reference type, not the actual object type. Any object that you create is more than a simple Object. It has access to all of the methods of the Class Object.
All these stories are useful for this concept. If there is a Dog object is made as a generic object, you can get the Dog object out of that by type casting. Key here is You must be sure the object is really a Dog.
Object o = al.get(index);
Dog d = (Dog) o; // Type casting
d.roam();
wait a minute , if your are not sure if that is dog or not, you can use instanceof operator to check. Because if you are wrong then you will get a classcastexception at run time.
if ( o instanceof Dog)
{
Dog d = (Dog) o;
}
Iterating again, JVM at run time will check the class of the Reference variable not the class of actual object .
There is problem in the textbook, In a Scenario of Animal -> Dog,Tiger,Hippo,Cat Hierarchy if we need a pet animal behaviors then it is best to create a new class called Pet and have all those behaviors saved in the new class and then make the Cat and Dog extend the new class Pet.
But there is a one problem with that approach, this will lead to multiple inheritance. And Multiple inheritance is not allowed in JAVA.
This is how a new concept call Interface is introduced.
Problem of multiple inheritance is solve using Interface. Interface is 100% Abstract class. All the methods in an Interface is abstract. Any subclass that is implementing this must have the implementation. So at the run time JVM will not be confused, since the subclass has to have the implementation there is no need to double check which one to use.
define
——
public interface pet
{
}
use
—
public class dog extends canine implements pet
{
}
Interface methods are public and abstract, so it is not necessary to specify them using keywords
public interface pet
{
public abstract void befriendly();
public abstract void play();
}
public class dog extends canine implements pet
{
public void befriendly()
{
…
}// implement the methods of pet interface
}
Interfaces do not have any code which is a good, that way the class that is implementing them can define their own methods which is often the case. Interface will define the method name , the signature and return type that will be common for all the classes that implements them. A class can be in any inheritance tree and it can still
implement any Interface that is the power of the Interface.
Objects must be from a class that is a subclass of the polymorphic type.
But with interface as polymorphic type the objects can be from anywhere in the inheritance tree.
some uses of Interface
– Object to save its state to a file, implement Serializable Interface
– Run the methods in a separate thread of execution, implement Runnable interface
you can extend only one class but implement many interfaces.
Now then here is what you need to know.
Class – it does not pass a Is-A test. or any other type
Sub Class – specific version of a Class. Follows the Is-A test
Abstract Class – When you want to create a template for the sub class and may be have few concrete methods.
Interface – When you want o define a role other classes can play
Remember we talked about Super keyword sometime back. Super is used to invoke the
super class methods from within a subclass.
abstract class report
{
void run_report()
{
//generic stuff
}
}
class buzzwordsreport extends report
{
void run_report()
{
super.run_report();// calls the super class version to do some generic stuff
…specific stuff
}
}
Java.lang.Object
Repetition:
Reference variable of type object and be used to call methods defined in class object, regardless of the type of the object to which the reference refers.
ArrayList<dog> is a way to tell compiler that you want only dogs to be put in that list nothing else. ArrayList actually gives only objects. Since the compiler does the typecasting for us at run time, we get the dog out of the arraylist instead of a generic object.
Practice
next page 257
More in next part.
References:
Head First Java 2nd Edition | https://knowingofnotknowing.wordpress.com/2016/06/14/java-beginners-part-11/ | CC-MAIN-2018-22 | refinedweb | 715 | 70.53 |
¤ Home » Programming » C Tutorial » Logical constructs in C - if..else & switch..case statements
The syntax of if statement, or if-else statement which requires evaluation of a condition and branching according to the outcome of the decision test is written as follows:
C treats logical values as integer data type with value 0 for false and non-zero for true. Thus, the <statement block> in the first syntax and <statement block 1> in the second syntax is executed if <expression> evaluates to true, i.e. evaluates to a non-zero value. Even a negative value is treated as true. Note that enclosing the block within curly braces becomes mandatory if the statement block consists of multiple statements. The <statement block 2> is executed when the <expression> in the if statement evaluates to false, i.e evaluates to a zero value.
The if statements can be nested. See some examples below.
Note that these two examples are not the same.
#include <stdio.h> #define TRUE 1 #define FALSE 0 main() { int EQUAL, min, max, x, y; printf("\nProvide two numbers:"); scanf("%d %d", &x, &y); fflush(stdin); /* Compare the two numbers x and y */ if(x == y) { EQUAL = TRUE; min = max = x; } else { EQUAL = FALSE; if(x > y) { min = y; max = x; } else { min = x; max = y; } } if(EQUAL) printf("Both x and y are equal\n"); else printf("Minimum: %d, Maximum: %d \n", min, max); }
1. Identify the mistake in the following C program. #include <stdio.h> main() { float basic; printf("Enter basic: "); scanf("%f", &basic); fflush(stdin); if(basic = 0) printf("Invalid Input\n"); else printf("Basic is %.2f", basic); } 2. Predict the output of the following C program. #include <stdio.h> main() { int a, b, c; scanf("%d %d %d", &a, &b, &c); if(a > b) { a += b; a ++; } if(a > c) a *= c; else c -= (a + b); printf("%d %d %d\n", a, b, c); }
The switch statement is usefull when there are multiple if-else conditions to be checked. The syntax is:
switch <expression> { case <expression 1> <statement block 1> break; case <expression 2> <statement block 2> break; case <expression 3> <statement block 3> break; ... case <expression n> <statement block n> break; default: //When none of the above expressions evaluate to true <statement block n+1> break; }
The switch statement is a compound statement which specifies alternate course of actions. Each alternative is expressed as a group of one or more statements which are identified by one or more labels called case labels. The following two different programs, intended to perform the same task, illustrate how nested if..else can be replaced by a switch..case construct.
/* Program having nested if statements */ #include <stdio.h> main() { char category; printf("Enter Category: "); category = getchar(); fflush(stdin); if(category == 'B') { printf("B.TECH students\n"); /* Statements for B.TECH Processing will go here */ } else if(category == 'M') { printf("M.Sc. Student \n"); /* Statements for M.Sc Processing will go here */ } else if(category == 'T') { printf("M.TECH Student \n"); /* Statements for M.TECH Processing will go here */ } else if(category == 'P') { printf("Ph. D. Student \n"); /* Statements for Ph.D Processing will go here */ } else { printf("ERROR\n"); /* Statements for Error Processing will go here */ } } /* Such programs with multiple condition checks can be better implemented with switch-case construct. The advantage is better readability of the code. */ #include <stdio.h> main() { char category; printf("Enter Category: "); category = getchar(); fflush(stdin); switch(category) { case 'B' : printf("B.TECH students\n"); /* B.TECH Processing */ break; case 'M' : printf("M.Sc. Student \n"); /* M.Sc Processing */ break; case 'T' : printf("M.TECH Student \n"); /* M.TECH Processing */ break; case 'P' : printf("Ph. D. Student \n"); /* Ph. D. Processing */ break; default : printf("ERROR\n"); /* Error Processing */ } }. | http://www.how2lab.com/programming/c/conditional-statements.php | CC-MAIN-2018-47 | refinedweb | 624 | 58.18 |
This article will show how to do topic modeling with Python.
What is Topic Modeling
Topic modeling is an unsupervised technique that aims to analyze large volumes of text data by clustering the documents into groups. In the case of topic modeling, the text data does not have any labels attached to it. Rather, topic modeling tries to group the documents into clusters based on similar characteristics.
A typical example of topic modeling is clustering a large number of newspaper articles that belong to the same category. In other words, it clusters documents that have the same topic. It is important to mention here that it is extremely difficult to evaluate the performance of topic modeling since there are no right answers. It is up to the user to find similar characteristics between the documents of one cluster and assign it an appropriate label or topic.
Two approaches are mainly used for topic modeling: Latent Dirichlet Allocation and Non-Negative Matrix Factorization. In the next sections, we will briefly review both of these approaches and see how they can be applied to topic modeling in Python.
Latent Dirichlet Allocation (LDA)
The LDA is based upon two general assumptions:
- Documents that have similar words usually have the same topic
- Documents that have groups of words frequently occurring together usually have the same topic.
These assumptions make sense because documents that have the same topic, for instance business topics, will have words like "economy", "profit", "stock market", "loss", etc. The second assumption states that if these words frequently occur together in multiple documents, those documents may belong to the same category.
Mathematically, the above two assumptions can be represented as:
- Documents are probability distributions over latent topics
- Topics are probability distributions over words
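These two distributions can be made concrete with a tiny hand-crafted example. The numbers below are invented for illustration (nothing is learned from data here); multiplying the two matrices gives the probability of each word in each document:

```python
import numpy as np

vocabulary = ["economy", "profit", "match", "goal"]

# Topics are probability distributions over words (each row sums to 1)
topic_word = np.array([
    [0.50, 0.40, 0.05, 0.05],  # topic 0: "business"
    [0.05, 0.05, 0.50, 0.40],  # topic 1: "sports"
])

# Documents are probability distributions over topics (each row sums to 1)
doc_topic = np.array([
    [0.9, 0.1],  # document 0 is mostly about business
    [0.2, 0.8],  # document 1 is mostly about sports
])

# Probability of observing each word in each document
doc_word = doc_topic @ topic_word
print(doc_word.round(3))  # each row still sums to 1
```

LDA works in the opposite direction: given only the observed word counts, it infers plausible doc-topic and topic-word matrices.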
LDA for Topic Modeling in Python
In this section we will see how Python can be used to implement LDA for topic modeling. The data set can be downloaded from Kaggle.
The data set contains user reviews for different products in the food category. We will use LDA to group the user reviews into 5 categories.
The first step, as always, is to import the data set along with the required libraries. Execute the following script to do so:
import pandas as pd
import numpy as np

reviews_datasets = pd.read_csv(r'E:\Datasets\Reviews.csv')
reviews_datasets = reviews_datasets.head(20000)
reviews_datasets = reviews_datasets.dropna()
In the script above we import the data set using the read_csv method of the pandas library. The original data set contains around 500k reviews. However, due to memory constraints, I will perform LDA only on the first 20k records. We filter the first 20k rows and then remove the null values from the data set (note that dropna() returns a new frame, so its result has to be assigned back).
Next, we print the first five rows of the dataset using the head() function to inspect our data:
reviews_datasets.head()
In the output, you will see the following data:
We will be applying LDA on the "Text" column since it contains the reviews, the rest of the columns will be ignored.
Let's see review number 350.
reviews_datasets['Text'][350]
In the output, you will see the following review text:
'These chocolate covered espresso beans are wonderful! The chocolate is very dark and rich and the "bean" inside is a very delightful blend of flavors with just enough caffine to really give it a zing.'
Before we can apply LDA, we need to create vocabulary of all the words in our data. Remember from the previous article, we could do so with the help of a count vectorizer. Look at the following script:
from sklearn.feature_extraction.text import CountVectorizer

count_vect = CountVectorizer(max_df=0.8, min_df=2, stop_words='english')
doc_term_matrix = count_vect.fit_transform(reviews_datasets['Text'].values.astype('U'))
In the script above we use the CountVectorizer class from the sklearn.feature_extraction.text module to create a document-term matrix. We specify to only include those words that appear in less than 80% of the documents and in at least 2 documents. We also remove all the stop words as they do not really contribute to topic modeling.
Now let's look at our document term matrix:
doc_term_matrix
Output:
<20000x14546 sparse matrix of type '<class 'numpy.int64'>' with 594703 stored elements in Compressed Sparse Row format>
Each of the 20k documents is represented as a 14546-dimensional vector, which means that our vocabulary has 14546 words.
Next, we will use LDA to create topics along with the probability distribution for each word in our vocabulary for each topic. Execute the following script:
from sklearn.decomposition import LatentDirichletAllocation

LDA = LatentDirichletAllocation(n_components=5, random_state=42)
LDA.fit(doc_term_matrix)
In the script above we use the LatentDirichletAllocation class from the sklearn.decomposition library to perform LDA on our document-term matrix. The parameter n_components specifies the number of categories, or topics, that we want our text to be divided into. The parameter random_state (aka the seed) is set to 42 so that you get results similar to mine.
Let's randomly fetch words from our vocabulary. We know that the count vectorizer contains all the words in our vocabulary. We can use the get_feature_names() method and pass it the ID of the word that we want to fetch.
The following script randomly fetches 10 words from our vocabulary:
import random

for i in range(10):
    random_id = random.randint(0, len(count_vect.get_feature_names()) - 1)  # randint is inclusive at both ends
    print(count_vect.get_feature_names()[random_id])
The output looks like this:
bribe
tarragon
qualifies
prepare
hangs
noted
churning
breeds
zon
chunkier
Let's find the 10 words with the highest probability for the first topic. To get the first topic, you can use the components_ attribute and pass a 0 index as the value:
first_topic = LDA.components_[0]
The first topic contains the probabilities of 14546 words for topic 1. To sort the indexes according to probability values, we can use the argsort() function. Once sorted, the 10 words with the highest probabilities will belong to the last 10 indexes of the array. The following script returns the indexes of the 10 words with the highest probabilities:
top_topic_words = first_topic.argsort()[-10:]
Output:
array([14106, 5892, 7088, 4290, 12596, 5771, 5187, 12888, 7498, 12921], dtype=int64)
These indexes can then be used to retrieve the value of the words from the count_vect object, which can be done like this:
for i in top_topic_words:
    print(count_vect.get_feature_names()[i])
In the output, you should see the following words:
water
great
just
drink
sugar
good
flavor
taste
like
tea
The words show that the first topic might be about tea.
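A side note on these values: strictly speaking, components_ holds unnormalized topic-word weights rather than true probabilities. The ranking produced by argsort() is unaffected, but if a real probability distribution is needed, each row has to be divided by its sum. A self-contained toy illustration (the corpus below is made up):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

toy_corpus = ["tea with sugar", "sugar in my tea",
              "dogs eat dog food", "dog food and treats"]
matrix = CountVectorizer().fit_transform(toy_corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=42).fit(matrix)

first_topic = lda.components_[0]
word_probabilities = first_topic / first_topic.sum()  # normalize the row

print(round(word_probabilities.sum(), 3))  # 1.0 after normalization
```

Since division by a positive constant preserves order, the top-10 word lists in this article are the same either way.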
Let's print the 10 words with the highest probabilities for all five topics:
for i,topic in enumerate(LDA.components_):
    print(f'Top 10 words for topic #{i}:')
    print([count_vect.get_feature_names()[i] for i in topic.argsort()[-10:]])
    print('\n')
The output looks like this:
Top 10 words for topic #0:
['water', 'great', 'just', 'drink', 'sugar', 'good', 'flavor', 'taste', 'like', 'tea']

Top 10 words for topic #1:
['br', 'chips', 'love', 'flavor', 'chocolate', 'just', 'great', 'taste', 'good', 'like']

Top 10 words for topic #2:
['just', 'drink', 'orange', 'sugar', 'soda', 'water', 'like', 'juice', 'product', 'br']

Top 10 words for topic #3:
['gluten', 'eat', 'free', 'product', 'like', 'dogs', 'treats', 'dog', 'br', 'food']

Top 10 words for topic #4:
['cups', 'price', 'great', 'like', 'amazon', 'good', 'br', 'product', 'cup', 'coffee']
The output shows that the second topic might contain reviews about chocolates, etc. Similarly, the third topic might again contain reviews about sodas or juices. You can see that there are a few common words in all the categories. This is because a few words are used in almost all the topics, for instance "good", "great", "like", etc.
As a final step, we will add a column to the original data frame that will store the topic for the text. To do so, we can use the LDA.transform() method and pass it our document-term matrix. This method will assign the probability of all the topics to each document. Look at the following code:
topic_values = LDA.transform(doc_term_matrix)
topic_values.shape
In the output, you will see (20000, 5), which means that each of the documents has 5 columns where each column corresponds to the probability value of a particular topic. To find the topic index with the maximum value, we can call the argmax() method and pass 1 as the value for the axis parameter.
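To make the argmax step concrete, here is one made-up probability row for a single review:

```python
import numpy as np

# Hypothetical probabilities of the 5 topics for one review
row = np.array([0.02, 0.05, 0.80, 0.08, 0.05])

print(row.argmax())  # 2 – the review gets assigned topic #2
print(row.max())     # 0.8 – the model's confidence in that assignment
```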
The following script adds a new column for topic in the data frame and assigns the topic value to each row in the column:
reviews_datasets['Topic'] = topic_values.argmax(axis=1)
Let's now see how the data set looks:
reviews_datasets.head()
Output:
You can see a new column for the topic in the output.
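The numeric ids become much more readable once mapped to human-chosen labels. The labels below are hypothetical names picked by eyeballing each topic's top words, applied here to a made-up frame for illustration:

```python
import pandas as pd

# Hypothetical labels, chosen after inspecting each topic's top words
topic_labels = {0: 'tea', 1: 'snacks', 2: 'juice', 3: 'pet food', 4: 'coffee'}

df = pd.DataFrame({'Topic': [0, 4, 4, 3, 0, 2]})  # made-up topic assignments
df['TopicLabel'] = df['Topic'].map(topic_labels)

print(df['TopicLabel'].value_counts())
```

On the real data frame the same map() call on the 'Topic' column would attach a label to every review.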
Non-Negative Matrix Factorization (NMF)
In the previous section, we saw how LDA can be used for topic modeling. In this section, we will see how non-negative matrix factorization can be used for topic modeling.
Non-negative matrix factorization is also an unsupervised learning technique which performs clustering as well as dimensionality reduction. It can be used in combination with the TF-IDF scheme to perform topic modeling. In this section, we will see how Python can be used to perform non-negative matrix factorization for topic modeling.
NMF for Topic Modeling in Python
In this section, we will perform topic modeling on the same data set as we used in the last section. You will see that the steps are also quite similar.
We start by importing the data set:
import pandas as pd
import numpy as np

reviews_datasets = pd.read_csv(r'E:\Datasets\Reviews.csv')
reviews_datasets = reviews_datasets.head(20000)
reviews_datasets = reviews_datasets.dropna()
In the previous section we used the count vectorizer, but in this section we will use the TF-IDF vectorizer, since NMF is typically used with TF-IDF (its input must be non-negative, and TF-IDF weights satisfy that). We will create a document-term matrix with TF-IDF. Look at the following script:
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vect = TfidfVectorizer(max_df=0.8, min_df=2, stop_words='english')
doc_term_matrix = tfidf_vect.fit_transform(reviews_datasets['Text'].values.astype('U'))
Once the document-term matrix is generated, we can create a matrix that contains topic-word weights for all the words in the vocabulary for all the topics. To do so, we can use the NMF class from the sklearn.decomposition module. Look at the following script:
from sklearn.decomposition import NMF

nmf = NMF(n_components=5, random_state=42)
nmf.fit(doc_term_matrix)
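What happens underneath: NMF factorizes the TF-IDF matrix V into two non-negative matrices, V ≈ W · H, where W (returned by fit_transform) holds document-topic weights and H (components_) holds topic-word weights. A self-contained sketch on a tiny made-up corpus:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

toy_corpus = ["strong coffee cup", "coffee cup morning",
              "green tea bags", "loose green tea"]
V = TfidfVectorizer().fit_transform(toy_corpus)

nmf_toy = NMF(n_components=2, random_state=42)
W = nmf_toy.fit_transform(V)   # document-topic weights, shape (4, 2)
H = nmf_toy.components_        # topic-word weights, shape (2, n_words)

print(W.shape, H.shape)
print(np.abs(V.toarray() - W @ H).max())  # reconstruction error of V ≈ W·H
```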
As we did in the previous section, let's randomly get 10 words from our vocabulary:
import random

for i in range(10):
    random_id = random.randint(0, len(tfidf_vect.get_feature_names()) - 1)  # randint is inclusive at both ends
    print(tfidf_vect.get_feature_names()[random_id])
In the output, you will see the following words:
safest
pith
ache
formula
fussy
frontier
burps
speaker
responsibility
dive
Next, we will retrieve the probability vector of words for the first topic and will retrieve the indexes of the ten words with the highest probabilities:
first_topic = nmf.components_[0]
top_topic_words = first_topic.argsort()[-10:]
These indexes can now be passed to the tfidf_vect object to retrieve the actual words. Look at the following script:
for i in top_topic_words:
    print(tfidf_vect.get_feature_names()[i])
The output looks like this:
really
chocolate
love
flavor
just
product
taste
great
good
like
The words for the first topic show that it might contain reviews about chocolates. Let's now print the ten words with the highest probabilities for each of the topics:
for i,topic in enumerate(nmf.components_):
    print(f'Top 10 words for topic #{i}:')
    print([tfidf_vect.get_feature_names()[i] for i in topic.argsort()[-10:]])
    print('\n')
The output of the script above looks like this:
Top 10 words for topic #0:
['really', 'chocolate', 'love', 'flavor', 'just', 'product', 'taste', 'great', 'good', 'like']

Top 10 words for topic #1:
['like', 'keurig', 'roast', 'flavor', 'blend', 'bold', 'strong', 'cups', 'cup', 'coffee']

Top 10 words for topic #2:
['com', 'amazon', 'orange', 'switch', 'water', 'drink', 'soda', 'sugar', 'juice', 'br']

Top 10 words for topic #3:
['bags', 'flavor', 'drink', 'iced', 'earl', 'loose', 'grey', 'teas', 'green', 'tea']

Top 10 words for topic #4:
['old', 'love', 'cat', 'eat', 'treat', 'loves', 'dogs', 'food', 'treats', 'dog']
The words for topic 1 show that it contains reviews about coffee. Similarly, the words for topic 2 depict that it contains reviews about sodas and juices. Topic 3 contains reviews about drinks such as tea. Finally, topic 4 may contain reviews about animal food since it contains words such as "cat", "dog", "treat", etc.
The following script adds the topics to the data set and displays the first five rows:
topic_values = nmf.transform(doc_term_matrix)
reviews_datasets['Topic'] = topic_values.argmax(axis=1)
reviews_datasets.head()
The output of the code above looks like this:
As you can see, a topic has been assigned to each review, which was generated using the NMF method.
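A practical tip: since the script above overwrites the 'Topic' column, keeping the LDA assignments under a separate name (e.g. 'Topic_LDA') makes it possible to compare the two clusterings side by side with a cross-tabulation. A toy illustration with made-up assignments:

```python
import pandas as pd

# Made-up topic assignments from the two models for six reviews
df = pd.DataFrame({'Topic_LDA': [0, 1, 1, 3, 4, 4],
                   'Topic_NMF': [1, 1, 1, 4, 1, 1]})

# Rows are LDA topics, columns are NMF topics; large counts off the
# diagonal are fine – the topic numbering of the two models is arbitrary
print(pd.crosstab(df['Topic_LDA'], df['Topic_NMF']))
```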
Conclusion
Topic modeling is one of the most sought-after research areas in NLP. It is used to group large volumes of unlabeled text data. In this article, two approaches to topic modeling were explained: we saw how Latent Dirichlet Allocation and Non-Negative Matrix Factorization can be used for topic modeling with the help of Python libraries.
# include <sys/types.h>
# include <sys/ipc.h>
key_t ftok(const char *pathname, int proj_id);
The resulting value is the same for all pathnames that name the same file, when the same value of proj_id is used. The value returned should be different when the (simultaneously existing) files or the project IDs differ.
Of course no guarantee can be given that the resulting key_t is unique. Typically, a best effort attempt combines the given proj_id byte, the lower 16 bits of the i-node number, and the lower 8 bits of the device number into a 32-bit result. Collisions may easily happen, for example between files on /dev/hda1 and files on /dev/sda1. | http://man.linuxmanpages.com/man3/ftok.3.php | crawl-003 | refinedweb | 115 | 61.97 |
ZF-7228: Zend_Loader will include a file multiple times (no include_once)
Description
In ZF-2923 (SVN 12769), the include_once call on line #83 was changed to include. This created some cases where a single file could be included multiple times and cause a 'Cannot redeclare class' fatal error.
// Register the ZF autoload function require_once 'Zend/Loader/AutoLoader.php'; $loader = Zend_Loader_Autoloader::getInstance(); $loader->setFallbackAutoloader(true); $loader->suppressNotFoundWarnings(); Zend_Locale::$compatibilityMode = false; $localeExists = class_exists('Locale') // Should equal false, but includes Zend/Locale.php
Posted by Jeff Mace (jeffmace) on 2009-07-08T13:38:50.000+0000
Sorry, I just remembered to check the trunk and it looks like it has already been updated there. When will that be moved into a tag?
Posted by Jeff Mace (jeffmace) on 2009-07-08T13:46:46.000+0000
Ok, double sorry. I'm not used to Fisheye so I was looking at an old version. The latest trunk does not have include_once.
Posted by Matthew Weier O'Phinney (matthew) on 2009-07-08T18:41:49.000+0000
First, are you absolutely sure about the error and the fix?
Zend_Loader::loadClass() does a class_exists() check before it ever attempts to load a class via include(). We switched from include_once() to include() as (a) the aforementioned class_exists() check typically short circuits the call in the first place, and (b) it provides some performance gain over its _once() cousin (which must do a stat() call internally to check against its internal path cache).
Second, the case that you present indicates that your include_path is set incorrectly. if class_exists('Locale') is including Zend_Locale, that means that you have library/Zend/ on your include_path -- instead of just library/ -- which is what should be on the path.
Finally, I'd recommend against using the autoloader as a fallback autoloader. One of the key reasons it was developed was to provide a namespaced autoloader -- which helps prevent the very issues you're running against. Always, always, always prefix your classes with your vendor/personal class prefix -- it prevents naming collisions, and assists the autoloader in preventing false positive lookups.
Posted by Jeff Mace (jeffmace) on 2009-07-09T06:30:32.000+0000
To your first points, I can understand that you want to get the performance benefits of dropping include_once to just include, but it does add the possibility in some edge cases for including a file multiple times. The only reason we've found it is because of some practices which are probably more lazy than anything on our part. But I'll get into that after addressing the include_path.
A standard PHP include path looks something like '.:{PEAR libraries}'. Since there is a '.' in the include path, an attempt to load a class of name 'Locale' will include the Zend/Locale.php file because the Loader.php file also sits in that Zend directory. That can be solved by shifting your include_path to ensure the real Locale file gets included before the Zend locale or by removing the '.'. But that is only a fix if there is a Locale.php file elsewhere that you are trying to include.
In this particular case we are running class_exists with the string 'Locale', which doesn't exist. If class_exists returns false, we prepend a prefix and run the check again. The second check finds the proper class and we continue running. This is why the class_exists check doesn't protect us from the include call.
If you are interested in keeping the benefits of the include call over include_once, could I suggest adding a setting to Zend_Loader that determines if it should use include or include_once?
Posted by Patrick van Dissel (tdm) on 2009-09-29T06:27:27.000+0000
I'm with Jeff Mace. I just got the same problem, we're looping over 2 different namespaces to autoload a class depending on the first namespace that it finds it in. With include_once it works perfectly, but with the current 1.9.2 and 1.9.3PL1 it just give me a blank error page without error.
Note that I only have this problem running it on Zend Platform Enterprise Edition with code acceleration and such options enabled, locally on a WAMPserver it does not occur. Also note that the Zend_Loader::loadFile() function has an option to use include or include_one, but it's not used withing the loadClass() function when you do not give it any directories.
I think all includes should be an include_once/require_once, no matter what.
Posted by Patrick van Dissel (tdm) on 2009-09-29T07:47:47.000+0000
Addition to my comment above:
Zend_Loader @line: 82&83:
can be replaced by the following line:
Posted by Radek (salac.r) on 2010-01-05T23:52:46.000+0000
I have just the same problem, Our class Auth was included as Zend_Auth and it caused problem "Cannot redeclare class". I spent 4 hours to figure out the reason. Please repair these bug. Thanks
Posted by Matthew Weier O'Phinney (matthew) on 2010-01-12T13:51:16.000+0000
Fixed in trunk, to release with 1.10.0. | http://framework.zend.com/issues/browse/ZF-7228?actionOrder=desc | CC-MAIN-2013-48 | refinedweb | 848 | 65.73 |
Shutdown Hooks are a special construct that allow developers to plug in a piece of code to be executed when the JVM is shutting down. This comes in handy in cases where we need to do special clean up operations in case the VM is shutting down.
Handling this using the general constructs such as making sure that we call a special procedure before the application exists (calling System.exit(0) ) will not work for situations where the VM is shutting down due to an external reason (ex. kill request from O/S), or due to a resource problem (out of memory). As we will see soon, shutdown hooks solve this problem easily, by allowing us to provide an arbitrary code block, which will be called by the JVM when it is shutting down.
From the surface, using a shutdown hook is downright straight forward. All we have to do is simply write a class which extends the java.lang.Thread class, and provide the logic that we want to perform when the VM is shutting down, inside the public void run() method. Then we register an instance of this class as a shutdown hook to the VM by calling Runtime.getRuntime().addShutdownHook(Thread) method. If you need to remove a previously registered shutdown hook, the Runtime class provides the removeShutdownHook(Thread) method as well.
For Example :
public class ShutDownHook { public static void main(String[] args) { Runtime.getRuntime().addShutdownHook(new Thread() { public void run() { System.out.println("Shutdown Hook is running !"); } }); System.out.println("Application Terminating ..."); } }
When we run the above code, you will see that the shutdown hook is getting called by the JVM when it finishes execution of the main method.
Output:
Application Terminating ... Shutdown Hook is running !
Simple right? Yes it is.
While it is pretty simple to write a shutdown hook, one needs to know the internals behind the shutdown hooks to make use of those properly. Therefore, in this article, we will be exploring some of the ‘gotchas’ behind the shutdown hook design.
1. Shutdown Hooks may not be executed in some cases!
First thing to keep in mind is that it is not guaranteed that shutdown hooks will always run. If the JVM crashes due to some internal error, then it might crash down without having a chance to execute a single instruction. Also, if the O/S gives a SIGKILL () signal (kill -9 in Unix/Linux) or TerminateProcess (Windows), then the application is required to terminate immediately without doing even waiting for any cleanup activities. In addition to the above, it is also possible to terminate the JVM without allowing the shutdown hooks to run by calling Runime.halt() method.
Shutdown hooks are called when the application terminates normally (when all threads finish, or when System.exit(0) is called). Also, when the JVM is shutting down due to external causes such as user requesting a termination (Ctrl+C), a SIGTERM being issued by O/S (normal kill command, without -9), or when the operating system is shutting down.
2. Once started, Shutdown Hooks can be forcibly stopped before completion.
This is actually a special case of the case explained before. Although the hook starts execution, it is possible to be terminated before it completes, in cases such as operating system shutdowns. In this type of cases, the O/S waits for a process to terminate for a specified amount of time once the SIGTERM is given. If the process does not terminate within this time limit, then the O/S terminates the process forcibly by issuing a SIGTERM (or the counterparts in Windows). So it is possible that this happens when the shutdown hook is half-way through its execution.
Therefore, it is advised to make sure that the Shutdown Hooks are written cautiously, ensuring that they finish quickly, and do not cause situations such as deadlocks. Also, the JavaDoc [1] specifically mentions that one should not perform long calculations or wait for User I/O operations in a shutdown hook.
3. We can have more than one Shutdown Hooks, but their execution order is not guaranteed.
As you might have correctly guessed by the method name of addShutdownHook method (instead of setShutdownHook), you can register more than one shutdown hook. But the execution order of these multiple hooks is not guaranteed by the JVM. The JVM can execute shutdown hooks in any arbitrary order. Moreover, the JVM might execute all these hooks concurrently.
4. We cannot register / unregister Shutdown Hooks with in Shutdown Hooks
Once the shutdown sequence is initiated by the JVM, it is not allowed to add more or remove any existing shutdown hooks. If this is attempted, the JVM throws IllegalStateException.
5. Once shutdown sequence starts, it can be stopped by Runtime.halt() only.
Once the shutdown sequence starts, only Runtime.halt() (which forcefully terminates the JVM) can stop the execution of the shutdown sequence (except for external influences such as SIGKILL). This means that calling System.exit() with in a Shutdown Hook will not work. Actually, if you call System.exit() with in a Shutdown Hook, the VM may get stuck, and we may have to terminate the process forcefully.
6. Using shutdown hooks require security permissions.
If we are using Java Security Managers, then the code which performs adding / removing of shutdown hooks need to have the shutdownHooks permission at runtime. If we invoke this method without the permission in a secure environment, then it will result in SecurityException.
References :.lang.Runtime class in Java
- Callable and Future in Java
- Java.util.concurrent.CyclicBarrier in Java
- Garbage Collection in Java
- CountDownLatch. | https://www.geeksforgeeks.org/jvm-shutdown-hook-java/ | CC-MAIN-2018-34 | refinedweb | 935 | 63.39 |
This guide will teach you how to use your Python programming skills to conquer your math class!
A calculator can be faster for simple calculations, but there are a lot of benefits to using iPython. If you don't know what iPython is, check it out here. iPython is an interactive python shell with a lot of rich functionality.
Once installed, you can start the iPython shell by opening your Terminal application and typing:
ipython
One major benefit of using iPython for your calculations is that you can easily see your history. Many math problems have multiple steps and it's helpful to have your previous calculations visible.
Even more valuable is the ability to save your output as a variable. If you're new to programming this is extremely valuable for problems with multiple steps.
rate = 700.0/15000 22000 * rate -> 1026.6666666666667
Using the Python shell will help you learn the language and programming in general.
Python, like many other languages, comes with great built-in functionality. For example, Python's math module comes with all sorts of helpful functionality like powers and logarithms, trigonometric functions, hyperbolic functions, and more. The best way to learn this and any other module is to read the docs, but here's a small taste of what is available.
import math math.log(3) -> 1.0986122886681098 # constants math.e -> 2.718281828459045 math.log(math.e) -> 1.0
Helpful built in functions:
# rounding round(1.234499, 3) -> 1.234 # absolute value abs(-3) -> 3 # power pow(3, 2) -> 9.0 # sum sum([0, 8, 11, 20, 5]) -> 44
from fractions import Fraction Fraction(1, 2) + Fraction(3, 5) -> Fraction(11, 10)
Oftentimes in your math class you'll have to solve the same kind of problem over and over. You can write your own functions in Python to save time.
To solve the quadratic equation (ax**2 + bx + c = 0), you can write a function that represents the quadratic formula.
import math def quadratic(a, b, c): # the discriminant d = (b**2) - (4 * a * c) # find both solutions s1 = (-b + math.sqrt(d)) / (2 * a) s2 = (-b - math.sqrt(d)) / (2 * a) print("Solution is {0}, {1}".format(s1, s2))
And to use your function:
quadratic(1, 5, 6) # Solution is -2.0, -3.0
Good programmers are often considered lazy because they like to automate repetitive tasks. If you find yourself wasting time by writing the same formula over and over, it might be time to write a function. If you get really good at this and use it frequently, you might want to learn how to write your own Python module to organize your functions. | https://howchoo.com/g/mdzhytg1odc/how-to-conquer-your-math-class-with-python | CC-MAIN-2020-05 | refinedweb | 444 | 73.78 |
Here's a patch written by: Paul Eggert <address@hidden> Nima Nikzad <address@hidden> Max Chang <address@hidden> Alexander Nguyen <address@hidden> Sahil Amoli <address@hidden> Nick Graham <address@hidden> This patch fixes a bug in 'sort' where it incorrectly deduces the number of available file descriptors by doing the equivalent of 'ulimit -n'. This method is incorrect, since many file descriptors could already be open, eating into the maximum available. I'm not quite up to speed on how you want ChangeLog entries formatted for submissions by so many authors, but there are papers on file for all these authors. One other thing: we have another team who has drafted a different patch for this problem, which I'll send soon. You may want to wait until you see that one as well; I'll compare the two patches when I send out the message about the other patch. ----- * doc/coreutils.texi (sort invocation): Document that we now silently lower nmerge if necessary. * src/sort.c (OPEN_MAX): Remove; no longer needed. (specify_nmerge): Don't use getrlimit, as the value it returns (max value "open" can return) is not the value that we want (how many file descriptors can we open?). Instead, just check for outlandish values; later on, we'll lower it. (lower_nmerge_if_necessary): New function. (main): Call it. * tests/Makefile.am (@Tests): Added new 'sort-merge-fdlimit' test case. * tests/misc/sort-merge (@Tests): Adjust to new behavior: 'sort' no longer looks at ulimit and no longer reports that (misleading) value to user. * tests/misc/sort-merge-fdlimit (@Tests): New file. 
--- doc/coreutils.texi | 14 ++++-- src/sort.c | 98 +++++++++++++++++++++++----------------- tests/Makefile.am | 1 + tests/misc/sort-merge | 3 +- tests/misc/sort-merge-fdlimit | 54 ++++++++++++++++++++++ 5 files changed, 121 insertions(+), 49 deletions(-) create mode 100644 tests/misc/sort-merge-fdlimit diff --git a/doc/coreutils.texi b/doc/coreutils.texi index 2c1fae5..5478fc2 100644 --- a/doc/coreutils.texi +++ b/doc/coreutils.texi @@ -3937,13 +3937,17 @@ and I/0. Conversely a small value of @var{nmerge} may reduce memory requirements and I/0 at the expense of temporary storage consumption and merge performance. -The value of @var{nmerge} must be at least 2. +The value of @var{nmerge} must be at least 2. The default value is +currently 16, but this is implementation-dependent and may change in +the future. The value of @var{nmerge} may be bounded by a resource limit for open -file descriptors. Try @samp{ulimit -n} or @samp{getconf OPEN_MAX} to -to display the limit for a particular system. -If the value of @var{nmerge} exceeds this limit, then @command{sort} will -issue a warning to standard error and exit with a nonzero status. +file descriptors. The commands @samp{ulimit -n} or @samp{getconf +OPEN_MAX} may display limits for your systems; these limits may be +modified further if your program already has some files open, or if +the operating system has other limits on the number of open files. If +the value of @var{nmerge} exceeds the resource limit, @command{sort} +silently uses a smaller value. 
@item -o @var{output-file} @itemx address@hidden diff --git a/src/sort.c b/src/sort.c index 7b0b064..98aecf4 100644 --- a/src/sort.c +++ b/src/sort.c @@ -76,13 +76,6 @@ struct rlimit { size_t rlim_cur; }; # endif #endif -#if !defined OPEN_MAX && defined NR_OPEN -# define OPEN_MAX NR_OPEN -#endif -#if !defined OPEN_MAX -# define OPEN_MAX 20 -#endif - #define UCHAR_LIM (UCHAR_MAX + 1) #ifndef DEFAULT_TMPDIR @@ -1102,53 +1095,72 @@ static void specify_nmerge (int oi, char c, char const *s) { uintmax_t n; - struct rlimit rlimit; enum strtol_error e = xstrtoumax (s, NULL, 10, &n, NULL); - /* Try to find out how many file descriptors we'll be able - to open. We need at least nmerge + 3 (STDIN_FILENO, - STDOUT_FILENO and STDERR_FILENO). */ - unsigned int max_nmerge = ((getrlimit (RLIMIT_NOFILE, &rlimit) == 0 - ? rlimit.rlim_cur - : OPEN_MAX) - - 3); - if (e == LONGINT_OK) { nmerge = n; - if (nmerge != n) + if (nmerge != n || INT_MAX - 1 < nmerge) e = LONGINT_OVERFLOW; - else - { - if (nmerge < 2) - { - error (0, 0, _("invalid --%s argument %s"), - long_options[oi].name, quote(s)); - error (SORT_FAILURE, 0, - _("minimum --%s argument is %s"), - long_options[oi].name, quote("2")); - } - else if (max_nmerge < nmerge) - { - e = LONGINT_OVERFLOW; - } - else - return; - } } - if (e == LONGINT_OVERFLOW) + if (e != LONGINT_OK) + { + if (e == LONGINT_OVERFLOW) + error (SORT_FAILURE, 0, _("--%s argument %s too large"), + long_options[oi].name, quote(s)); + xstrtol_fatal (e, oi, c, long_options, s); + } + + if (nmerge < 2) { - char max_nmerge_buf[INT_BUFSIZE_BOUND (unsigned int)]; - error (0, 0, _("--%s argument %s too large"), + error (0, 0, _("invalid --%s argument %s"), long_options[oi].name, quote(s)); error (SORT_FAILURE, 0, - _("maximum --%s argument with current rlimit is %s"), - long_options[oi].name, - uinttostr (max_nmerge, max_nmerge_buf)); + _("minimum --%s argument is %s"), + long_options[oi].name, quote("2")); } - else - xstrtol_fatal (e, oi, c, long_options, s); +} + 
+/* Lower NMERGE if necessary, to fit within operating system limits. */ +static void +lower_nmerge_if_necessary (void) +{ + /* How many file descriptors are needed per input and output file. */ + int fds_per_file = (compress_program ? 3 : 1); + + /* How many file descriptors we think we'll need, total. It + obviously can't exceed INT_MAX. */ + int fds_needed = (nmerge <= INT_MAX / fds_per_file - 1 + ? (nmerge + 1) * fds_per_file + : INT_MAX); + + int *fd = xnmalloc (fds_needed, sizeof *fd); + int fds = 0; + + /* Find out how many file descriptors we actually can create, and set + NMERGE accordingly. */ + + int nmerge_val = 0; + + if (pipe (fd) == 0) + { + for (fds = 2; fds < fds_needed; fds++) + { + fd[fds] = dup (fd[0]); + if (fd[fds] < 0) + break; + } + nmerge_val = fds / fds_per_file - 1; + } + + if (nmerge_val < 2) + error (SORT_FAILURE, EMFILE, _("cannot merge")); + nmerge = nmerge_val; + + while (0 <= --fds) + close (fd[fds]); + free (fd); } /* Specify the amount of main memory to use when sorting. */ @@ -3381,6 +3393,8 @@ main (int argc, char **argv) files = − } + lower_nmerge_if_necessary (); + /* Need to re-check that we meet the minimum requirement for memory usage with the final value for NMERGE. */ if (0 < sort_size) diff --git a/tests/Makefile.am b/tests/Makefile.am index 07e9473..bb2aaad 100644 --- a/tests/Makefile.am +++ b/tests/Makefile.am @@ -202,6 +202,7 @@ TESTS = \ misc/sort-compress \ misc/sort-files0-from \ misc/sort-merge \ + misc/sort-merge-fdlimit \ misc/sort-rand \ misc/sort-version \ misc/split-a \ diff --git a/tests/misc/sort-merge b/tests/misc/sort-merge index e360d1c..f11ac1a 100755 --- a/tests/misc/sort-merge +++ b/tests/misc/sort-merge @@ -57,8 +57,7 @@ my @Tests = ['nmerge-big', "-m --batch-size=$bigint", @inputs, {ERR_SUBST=>'s/(current rlimit is) \d+/$1/'}, - {ERR=>"$prog: --batch-size argument `$bigint' too large\n". 
- "$prog: maximum --batch-size argument with current rlimit is\n"}, + {ERR=>"$prog: --batch-size argument `$bigint' too large\n"}, {EXIT=>2}], # This should work since nmerge >= the number of input files diff --git a/tests/misc/sort-merge-fdlimit b/tests/misc/sort-merge-fdlimit new file mode 100644 index 0000000..82305be --- /dev/null +++ b/tests/misc/sort-merge-fdlimit @@ -0,0 +1,54 @@ +#!/bin/sh +# Test whether sort avoids opening more file descriptors than it is +# allowed when merging files. + +# + sort --version +fi + +. $srcdir/test-lib.sh +require_ulimit_ + +mkdir in err || framework_failure + +fail=0 + +for i in `seq 17`; do + echo $i >in/$i +done + +# When these tests are run inside the automated testing framework, they +# have one less available file descriptor than when run outside the +# automated testing framework. If a test with a batch size of b fails +# inside the ATF, then the same test with batch size b+1 may pass outside +# the ATF but fail inside it. + +# The default batch size (nmerge) is 16. +(ulimit -n 19 \ + && sort -m --batch-size=16 in/* 2>err/merge-default-err \ + || ! grep "open failed" err/merge-default-err) || fail=1 + +# If sort opens a file (/dev/urandom) to sort by random hashes of keys, +# it needs to consider this file against its limit on open file +# descriptors. +(ulimit -n 20 \ + && sort -mR --batch-size=16 in/* 2>err/merge-random-err \ + || ! grep "open failed" err/merge-random-err) || fail=1 + +Exit $fail -- 1.5.4.3 | http://lists.gnu.org/archive/html/bug-coreutils/2009-03/msg00070.html | CC-MAIN-2015-22 | refinedweb | 1,306 | 56.35 |
Images linked in db not displayed
I am a begginner starting with new project (Nette 2.4). There is a post saved in db that contains a html <img> tag, but the image is not displayed, although I use |noescape.
The image link iteself is working, it is displayed properly when inserted directly into template.
Could anybody please help? Thanks a lot
- petr.jirous
- Member | 128
how is your template rendered? paste final html please
Template:
{block content}
<div class=“date”>{$post->created_at|date:‘%d. %m. %Y’}</div>
<div class=“post”>{$post->content|noescape}</div>
{/block}
Image contained in a post is located at:
{$basePath}/gallery/posts/test.jpg
I'm using the basic app mentioned here
Tracy enabled.
Last edited by jarjar (2016-12-31 11:42)
- David Kregl
- Member | 52
You definitely should use {$basePath} until you got some filters, which always return you the right path. Because even if it seems to work now, later, on the production server or under different circumstances it might not work as expected.
Always try to come up with a more general solution.
A class with the filter could look like this:
<?php namespace AppBundle\Model\Filters; use Nette\Http\IRequest; class ImagePath { /** * @var string */ private $wwwDir; /** * ImagePath constructor. * @param IRequest $httpRequest */ public function __construct(IRequest $httpRequest) { $this->wwwDir = $httpRequest->getUrl()->getBasePath(); } public function __invoke($imageName) { return $this->wwwDir . '/uploads/photos/original/' . $imageName; } }
This is how you register a new filter in Presenter (usually BasePresenter)
protected function beforeRender() { parent::beforeRender(); $template->addFilter('imagePath', new ImagePath($this->getHttpRequest())); }
And this is how you use it in Latte
<img src="{$post->image|imagePath}" alt"{$post->title}">
Last edited by David Kregl (2017-01-04 06:53)
@DavidKregl
I'm actually a fan of not using $basePath at all, I just assume all websites I'm building will be on top level so code is actually cleaner (I like better src=“/dir/file.ext” then src=“{$basePath}dir/file.ext”)
I really don't see a problem my website not running in some subdirectory, in the end you can always really easy create subdomain…
$basePath caused much issues when updating nette in past (it was BC at one point) and I really don't see a point of using it
Last edited by dkorpar (2017-01-05 22:39) | https://forum.nette.org/en/27776-images-linked-in-db-not-displayed | CC-MAIN-2022-33 | refinedweb | 385 | 53.1 |
At the first part of the code(Initializing relays) the GPIO output works as expected, but at the second part it doesn’t works, I have break my head for the last four hours trying to find out what is happening here…
The updateGPIOstatuses method is executed as a thread:
_thread.start_new_thread(updateGPIOstatuses,(gpioCheckerInterval,pins,))
The actual behaviour is: The second part of this code doesn’t turn on the relay from the relay board… But at the first part it does the job.
Any idea?
def updateGPIOstatuses(gpioCheckerInterval,gpioConfig): GPIO.setwarnings(False) GPIO.setmode(GPIO.BCM) gpioConfig=[19] #First part print('Initializing relays') for i in gpioConfig: print('Relay '+str(i)+' on..') GPIO.setup(i, GPIO.OUT) GPIO.output(i, GPIO.HIGH) time.sleep(0.5) GPIO.output(i, GPIO.LOW) time.sleep(0.5) print('Relay '+str(i)+' off..') print('Initialization done') #Second part while True: #statuses=dbQuery('2','') statuses={'gpio8': 0, 'gpio7': 0, 'gpio10': 0, 'gpio4': 0, 'gpio2': 0, 'gpio1': 1, 'device': '1', 'gpio5': 0, 'gpio3': 0, 'gpio9': 0, 'gpio6': 0} time.sleep(0.5) after='' print(' ') j=1 for i in gpioConfig: newState=statuses['gpio'+str(j)] j+=1 if newState==1: print(1) GPIO.output(i, GPIO.HIGH) time.sleep(0.5) else: print(0) GPIO.output(i, GPIO.LOW) time.sleep(0.5) time.sleep(gpioCheckerInterval)
EXTRA: I forget to mention, the relay phisical state changes in the first part, but not in the second one, if I check the pin state at the second part after changing the pin value I get the correct value, but it doesn’t correspond to the phisical state of the current relay.
In resume, seems like the pin is correctly settled but the relay doesn’t, and as I said, only in the second part.
- 1Your code is really confusing. You are passing in gpioConfig then setting it to [19] is that your intention? – CoderMike 8 hours ago
- 1Does HIGH or LOW turn your relay on? – CoderMike 7 hours ago
- 1setting it to 19 just for this example, normally it is passed from outside, I just want to test with one pin – Martin Ocando Corleone 7 hours ago
- 1Yes, HIGH or LOW toggles the relay in the first part but not in the second one – Martin Ocando Corleone 7 hours ago
- 1I added an extra info – Martin Ocando Corleone 7 hours ago
- 1Does HIGH turn the relay on or does LOW turn the relay on? – CoderMike 6 hours ago
- 1You should really only call setwarnings, setmode and setup once at the start of your code. – CoderMike 6 hours ago
- 1Your second section only ever sets the relay HIGH because gpio1 returns 1. – CoderMike 5 hours ago
- 5 hours ago
- 1Thank you @CoderMike for help me to see it was inverted.. – Martin Ocando Corleone 5 hours ago
- “I have break my head for the last four hours” – so you haven’t made a serious effort to debug your code? – Milliways 4 hours ago
- 4 hours ago
Question
User Requirements/Functional Specification
v0.1 – How to debug my program is less than 4 hours?
v0.2 – How to use Rpi control multiple relays?
v0.3 – How to use Rpi4B Thonny python 3.7.3 to control high logical level triggered relays?
v0.4 – How to use Rpi4B Thonny python 3.7.3 to control GPIO pins which in turn turn on and off multiple high level triggered relays?
v0.5 – How to use Rpi4B Thonny python 3.7.3, Rpi.GPIO.0.7.0/GpioZero 1.5.1 to control, interruptable/event driver/multiple processing/threading Rpi GPIO pins or MCP23017/s17 extended GPIO pins which in turn control multiple, up to 64 high level triggered 3V3/5V0 relays?
v0.6.1 – How to use GPIO/GpioZero to control gpio pin pins? v0.6.2 – How to use gpio pins to control multiple relays? v0.6.3 – How to use python list/dictionary data structure/algorithm to develop declarative style relay control systems.
Note 1 – V0.6.3 above is what the OP actually using python threading, dictionary declaration, list processing techniques, to write his program. Now I am going to use his program as a case study to explain to python newbies, how his program works, and how to guarantee successful debugging in less than 40 minutes.
Make it as simple as possible (Occam Razor Cutting) steps
Step 1 – Cutting too higher level stuff/functions, eg multi-threading
The multi-threading function to start a new thread (update GPIO pin(s) status), in not relevant to debugging at the current, lower, thread level, so should be cut first.
thread.start_new_thread (updateGPIOstatus, (updateInterval, gpioPinList,))
Reference – Python – Multithreaded Programming – tutoriaspoint
The general function definition is the following:
thread.start_new_thread (function, args[, kwargs])
Step 2 – Cutting to lower level stuff/functions (eg. GPIO initialization)
The OP uses the following statement often, and to make things not go wrong easily, it is a good idea to make it a “function”, sort of cutting lower level stuff and hide it (Info Hiding) or abstract it (ADT, Abstract Data Type).
GPIO.setup(i, GPIO.OUT) GPIO.output(i, GPIO.HIGH)
The other GPIO initialization steps, though used only once, can also be hidden in a higher level function. Depend on application, there are different ways to do the multi-level, nested function definitions. For our case study here, let me give an example program with these functions. My program for now is called gpioControl001.py, later I will uplevel to ledControl001.py, relayControl001, and so on.
/ to continue, …
Answer
Ah, let me see, I think I can very likely find the bug in less than 40 minutes,
(1) Get my Occam’s razor (KISS),
(2) Morning coffee time, ..
Update – Since @CodeMike has already given an answer acceptable to the OP, I will, nevertheless, still complete my answer, focusing on debugging skills (actually software engineering/development methodology), to encourage/comfort those newbie programmers like me, without sharp eyes and a clear/vivid mind, can still catch the nasty bug sooner or later, with guarantee that no one’s head will be broken.
Appendices
Appendix A – The code
Appendix B – KISS
KISS Principle – Wikipedia‘s .
A variant – “Make everything as simple as possible, but not simpler” – is attributed to Albert Einstein, although this may be an editor’s paraphrase of a lecture he gave.
Appendix C – Occam’s Razor
Occam’s razor – Wikipedia
Occam’s razor is the problem-solving principle that states that “Entities should not be multiplied without necessity.”
The idea is attributed to English Franciscan friar William of Ockham (1287–1347), a scholastic philosopher and theologian who used a preference for simplicity to defend the idea of divine miracles.
It is sometimes paraphrased by a statement like “the simplest solution is most likely the right one”, but is the same as the Razor only if results match.
Occam’s razor says that when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions, and it is not meant to be a way of choosing between hypotheses that make different predictions.
Similarly, in science, Occam’s razor is used as an abductive heuristic.
Appendix D – William Occam
William of Ockham (Occam, c. 1280—c. 1349) – IEP.
Categories: Uncategorized | https://tlfong01.blog/2020/01/09/gpio-control-function-v001/ | CC-MAIN-2020-40 | refinedweb | 1,212 | 50.87 |
tensorflow::
ops::
QuantizeV2
#include <array_ops.h>
Quantize the 'input' tensor of type float to 'output' tensor of type 'T'.
Summary
[min_range, max_range] are scalar floats that specify the range for the 'input' data. The 'mode' attribute controls exactly which calculations are used to convert the float values to their quantized equivalents. The 'round_mode' attribute controls which rounding tie-breaking algorithm is used when rounding float values to their quantized equivalents.
In 'MIN_COMBINED' mode, each value of the tensor will undergo the following:
out[i] = (in[i] - min_range) * range(T) / (max_range - min_range) if T == qint8: out[i] -= (range(T) + 1) / 2.0
here
range(T) = numeric_limits
MIN_COMBINED Mode Example
Assume the input is type float and has a possible range of [0.0, 6.0] and the output type is quint8 ([0, 255]). The min_range and max_range values should be specified as 0.0 and 6.0. Quantizing from float to quint8 will multiply each value of the input by 255/6 and cast to quint8.
If the output type was qint8 ([-128, 127]), the operation will additionally subtract each value by 128 prior to casting, so that the range of values aligns with the range of qint8.
If the mode is 'MIN_FIRST', then this approach is used:
num_discrete_values = 1 << (# of bits in T) range_adjust = num_discrete_values / (num_discrete_values - 1) range = (range_max - range_min) * range_adjust range_scale = num_discrete_values / range quantized = round(input * range_scale) - round(range_min * range_scale) + numeric_limits
::min() quantized = max(quantized, numeric_limits ::min()) quantized = min(quantized, numeric_limits ::max())
The biggest difference between this and MIN_COMBINED is that the minimum range is rounded first, before it's subtracted from the rounded value. With MIN_COMBINED, a small bias is introduced where repeated iterations of quantizing and dequantizing will introduce a larger and larger error.
SCALED mode Example
SCALED
mode matches the quantization approach used in
QuantizeAndDequantize{V2|V3}
.
If the mode is
SCALED
, the quantization is performed by multiplying each input value by a scaling_factor. The scaling_factor is determined from
min_range
and
max_range
to be as large as possible such that the range from
min_range
to
max_range
is representable within values of type T.
const int min_T = std::numeric_limits
::min(); const int max_T = std::numeric_limits ::max(); const float max_float = std::numeric_limits ::max();
const float scale_factor_from_min_side = (min_T * min_range > 0) ? min_T / min_range : max_float; const float scale_factor_from_max_side = (max_T * max_range > 0) ? max_T / max_range : max_float;
const float scale_factor = std::min(scale_factor_from_min_side, scale_factor_from_max_side);
We next use the scale_factor to adjust min_range and max_range as follows:
min_range = min_T / scale_factor; max_range = max_T / scale_factor;
e.g. if T = qint8, and initially min_range = -10, and max_range = 9, we would compare -128/-10.0 = 12.8 to 127/9.0 = 14.11, and set scaling_factor = 12.8 In this case, min_range would remain -10, but max_range would be adjusted to 127 / 12.8 = 9.921875
So we will quantize input values in the range (-10, 9.921875) to (-128, 127).
The input tensor can now be quantized by clipping values to the range
min_range
to
max_range
, then multiplying by scale_factor as follows:
result = round(min(max_range, max(min_range, input)) * scale_factor)
The adjusted
min_range
and
max_range
are returned as outputs 2 and 3 of this operation. These outputs should be used as the range for any further calculations.
narrow_range (bool) attribute
If true, we do not use the minimum quantized value. i.e. for int8 the quantized output, it would be restricted to the range -127..127 instead of the full -128..127 range. This is provided for compatibility with certain inference backends. (Only applies to SCALED mode)
axis (int) attribute
An optional
axis
attribute can specify a dimension index of the input tensor, such that quantization ranges will be calculated and applied separately for each slice of the tensor along that dimension. This is useful for per-channel quantization.
If axis is specified, min_range and max_range
if
axis
=None, per-tensor quantization is performed as normal.
ensure_minimum_range (float) attribute
Ensures the minimum quantization range is at least this value. The legacy default value for this is 0.01, but it is strongly suggested to set it to 0 for new uses.
Args:
-.
Public attributes
output
::tensorflow::Output output
output_max
::tensorflow::Output output_max
output_min
::tensorflow::Output output_min
Public functions
QuantizeV2
QuantizeV2( const ::tensorflow::Scope & scope, ::tensorflow::Input input, ::tensorflow::Input min_range, ::tensorflow::Input max_range, DataType T )
QuantizeV2
QuantizeV2( const ::tensorflow::Scope & scope, ::tensorflow::Input input, ::tensorflow::Input min_range, ::tensorflow::Input max_range, DataType T, const QuantizeV2::Attrs & attrs ) | https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/quantize-v2?hl=iw | CC-MAIN-2021-39 | refinedweb | 728 | 54.02 |
Mission Overview
Cluster Difference Imaging Photometric Survey (CDIPS)
Primary Investigator: Luke Bouma
HLSP Authors: Luke Bouma
Released: 2019-10-04
Updated: 2020-08-18
Primary Reference(s): Bouma et al. (2019)
DOI: 10.17909/t9-ayd0-k727
Citations: See ADS Statistics
CDIPS target star positions (blue) and nominal TESS observing footprint (gray). Target stars are either candidate members of clusters, or else have other youth indicators. Most will be observed for one or two lunar months during the TESS Prime Mission.
Overview
The TESS mission has been releasing full-frame images recorded at 30 minute cadence. Using the TESS images, the CDIPS team has begun a Cluster Difference Imaging Photometric Survey (CDIPS), in which they are making light curves for stars that are candidate members of open clusters and moving groups. They have also included stars that show photometric indications of youth. Each light curve represents between 20 and 25 days of observations of a star brighter than Gaia Rp magnitude of 16. The precision of the detrended light curves is generally in line with theoretical expectations.
The pipeline is called "cdips-pipeline", and it is available for inspection as a GitHub repository, and should be cited as an independent software reference (Bhatti et al., 2019,).
Before using the light curves, the team strongly recommends that you become familiar with the TESS data release notes, and also consult the TESS Instrument Handbook, available at MAST ().
The team has also created a catalog of target metadata, such as cluster name, cluster membership provenance, Gaia magnitudes, and parallax values. The catalog is available as a .csv file in the Data Access section.
DR1 CONTENTS
The first CDIPS data release (2019-10-02) contains 159,343 light curves of target stars that fell on silicon during TESS Sectors 6 and 7. They cover about one sixth of the Galactic plane. The target stars are described and listed in Bouma et al. 2019. They are stars for which a mix of Gaia and pre-Gaia kinematic, astrometric, and photometric information suggest either cluster membership or youth.
DR2 CONTENTS
The second CDIPS data release (2019-12-09) contains 355,380 light curves of target stars that fell on silicon during TESS Sectors 8, 9, 10 and 11. Combined with DR1, Galactic longitudes from ~190 to 320 degrees are covered, totalling about half a million stars brighter than Gaia-Rp of 16. The reduction methods used for the second release are identical to those from Bouma et al. 2019, except as noted in the CDIPS README file. Target stars have had claims of youth in the literature. Their light curves are amenable for studies in stellar and exoplanetary astrophysics.
DR3 CONTENTS
The third CDIPS data release (2020-05-07) contains 130,215 light curves of target stars that fell on silicon during TESS Sectors 12 and 13.
DR4 CONTENTS
The fourth CDIPS data release (2020-08-25) contains 26,956 light curves of target stars that fell on silicon during TESS Sectors 1 through 5. Sectors 1 through 4 look away from the galactic plane, and so there are fewer young stars than in Sectors 5-13. Some of the Orion complex is visible in Sector 5.
Data Products
Each target's light curve file is stored in a sub-directory based on the Sector it was observed in as a 4-digit zero-padded number. They are further divided into sub-directories based on the camera and chip number they are on. For example, "s0006/cam1_ccd1/" for Sector 6 light curves that are on CCD #1 on Camera #1.
The light curves are in a FITS format familiar to users of the Kepler, K2, and TESS-short cadence light curves made by the NASA Ames team. Their file names follow this convention:
hlsp_cdips_tess_ffi_gaiatwo<gaiaid>-<sectornum>-<cam-chip>_tess_v01_llc.fits
where:
- <gaiaid> = full Gaia DR2 target id, e.g., "0003321416308714545920"
- <sectornum> = 4-digit, zero-padded Sector number, e.g., "0006"
- <cam-chip> = the camera and chip numbers, e.g., "cam2-ccd4"
The catalog of target metadata is stored at the top level, and follows this format:
hlsp_cdips_tess_ffi_<sector-start>-<sector-end>_tess_v01_catalog.csv
where:
- <sector-start> is the first TESS Sector that has target light curves, e.g., "s0001"
- <sector-end> is the last TESS Sector that has target light curves, e.g., "s0013"
Data file types:
Catalog Metadata Columns
NOTE: The catalog file is delimited by semi-colon (;) characters, while commas (,) are used for some columns that have multiple values. The catalog has the following columns:
Light Curve FITS File Format
The primary header contains information about the target star, including the catalogs that claimed cluster membership or youth ("CDIPSREF"), and a key that enables back-referencing to those catalogs in order to discover whatever those investigators said about the object ("CDEXTCAT"). Membership claims based on Gaia-DR2 data are typically the highest quality claims. Cross-matches against TICv8 and Gaia-DR2 are also included.
The sole binary table extension contains the light curves. Three aperture sizes are used:
- APERTURE1 = 1 pixel in radius
- APERTURE2: = 1.5 pixels in radius
- APERTURE3 = 2.25 pixels in radius
Three different types of light curves are available. The first is the raw "instrumental" light curve measured from differenced images. The second is a detrended light curve that regresses against the number of principal components noted in the light curve's header. The third is a detrended light curve found by applying TFA with a fixed number of template stars. The recommended time stamp is "TMID_BJD", which is the exposure mid-time at the barycenter of the solar system (BJD), in the Temps Dynamique Barycentrique standard (TDB). For further details, please see Bouma et al. 2019, or send emails to the authors.
The full set of available time-series vectors is as follows:
- TTYPE1 = 'BGE ' / Background measurement error
- TTYPE2 = 'BGV ' / Background value (after bkgd surface subtrxn)
- TTYPE3 = 'FDV ' / Measured D value (see Pal 2009 eq 31)
- TTYPE4 = 'FKV ' / Measured K value (see Pal 2009 eq 31)
- TTYPE5 = 'FSV ' / Measured S value (see Pal 2009 eq 31)
- TTYPE6 = 'IFE1 ' / Flux error in aperture 1 (ADU)
- TTYPE7 = 'IFE2 ' / Flux error in aperture 2 (ADU)
- TTYPE8 = 'IFE3 ' / Flux error in aperture 3 (ADU)
- TTYPE9 = 'IFL1 ' / Flux in aperture 1 (ADU)
- TTYPE10 = 'IFL2 ' / Flux in aperture 2 (ADU)
- TTYPE11 = 'IFL3 ' / Flux in aperture 3 (ADU)
- TTYPE12 = 'IRE1 ' / Instrumental mag error for aperture 1
- TTYPE13 = 'IRE2 ' / Instrumental mag error for aperture 2
- TTYPE14 = 'IRE3 ' / Instrumental mag error for aperture 3
- TTYPE15 = 'IRM1 ' / Instrumental mag in aperture 1
- TTYPE16 = 'IRM2 ' / Instrumental mag in aperture 2
- TTYPE17 = 'IRM3 ' / Instrumental mag in aperture 3
- TTYPE18 = 'IRQ1 ' / Instrumental quality flag ap 1, 0/G OK, X bad
- TTYPE19 = 'IRQ2 ' / Instrumental quality flag ap 2, 0/G OK, X bad
- TTYPE20 = 'IRQ3 ' / Instrumental quality flag ap 3, 0/G OK, X bad
- TTYPE21 = 'RSTFC ' / Unique frame key
- TTYPE22 = 'TMID_UTC' / Exp mid-time in JD_UTC (from DATE-OBS,DATE-END)
- TTYPE23 = 'XIC ' / Shifted X coordinate on CCD on subtracted frame
- TTYPE24 = 'YIC ' / Shifted Y coordinate on CCD on subtracted frame
- TTYPE25 = 'CCDTEMP ' / Mean CCD temperature S_CAM_ALCU_sensor_CCD
- TTYPE26 = 'NTEMPS ' / Number of temperatures avgd to get ccdtemp
- TTYPE27 = 'TMID_BJD' / Exp mid-time in BJD_TDB (BJDCORR applied)
- TTYPE28 = 'BJDCORR ' / BJD_TDB = JD_UTC + TDBCOR + BJDCORR
- TTYPE29 = 'TFA1 ' / TFA Trend-filtered magnitude in aperture 1
- TTYPE30 = 'TFA2 ' / TFA Trend-filtered magnitude in aperture 2
- TTYPE31 = 'TFA3 ' / TFA Trend-filtered magnitude in aperture 3
- TTYPE32 = 'PCA1 ' / PCA Trend-filtered magnitude in aperture 1
- TTYPE33 = 'PCA2 ' / PCA Trend-filtered magnitude in aperture 2
- TTYPE34 = 'PCA3 ' / PCA Trend-filtered magnitude in aperture 3
Note: a very small number of targets fall on more than one camera-chip combination in a given Sector. In these cases, there are multiple files produced. One example is Gaia DR2 3041652034662522752 in Sector 7, which falls on both Camera 1 CCD 1 and Camera 2 CCD4, and thus has two files:
/s0007/cam1_ccd1/hlsp_cdips_tess_ffi_gaiatwo0003041652034662522752-0007-cam1-ccd1_tess_v01_llc.fits /s0007/cam2_ccd4/hlsp_cdips_tess_ffi_gaiatwo0003041652034662522752-0007-cam2-ccd4_tess_v01_llc.fits
Data Access
Catalog File
The cumulative DR4 catalog file can be downloaded directly here: hlsp_cdips_tess_ffi_s0001-s0013_tess_v01_catalog.csv
The catalog file can be used to select light curves for a specific cluster, as reported by specific authors in the literature. For example, to select all the CDIPS light curves for members of NGC 2516 reported by Cantat-Gaudin et al., 2018, one could do the following in Python (click Expand to see sample script):
import pandas as pd df = pd.read_csv('cdips_lc_metadata_20200812_S1_thru_S13.csv', sep=';') sel = ( df.reference.str.contains('CantatGaudin_2018') & df.cluster.str.contains('NGC_2516') ) df_ngc2516 = df[sel]
This yields 4992 light curves for 876 unique stars observed over the first year of TESS, with an average of ~5 sectors per star. Given the list of source_ids, MAST can then be queried for the light curves (for example, using astroquery).
Astroquery Example
CDIPS data products are available in the MAST Portal and astroquery.mast. For those who want to download light curves for a single target, or all light curves for a given Sector, see the following Python code example below. NOTE: There are tens of thousands of light curves for a given Sector, thus downloading all of the products can take the better part of a day, even with good internet connections. By default, the light curve files will be downloaded under a folder called "mastDownload" in the same working directory that your run the Python script from. Expand the box below for a sample script.
from astroquery.mast import Observations # Search for CDIPS light curves within 0.001 degrees of V684 Mon. obs_table = Observations.query_criteria(objectname="V684 Mon", radius=".001 deg", provenance_name="CDIPS") print("Found " + str(len(obs_table)) + " CDIPS light curves.") # Get list of available products for this Observation. cdips_products = Observations.get_product_list(obs_table) # Download the products for this Observation. manifest = Observations.download_products(cdips_products) # Search for CDIPS light curves directly based on TIC ID. ticid = '220314428' obs_table = Observations.query_criteria(target_name=ticid, provenance_name="CDIPS") print("Found " + str(len(obs_table)) + " CDIPS light curves.") # Get list of available products for this Observation. cdips_products = Observations.get_product_list(obs_table) # Download the products for this Observation. manifest = Observations.download_products(cdips_products) print("Done") # Get all CDIPS light curves for a given Sector, may not work # depending on bandwidth and traffic, we suggest you use bulk # download scripts instead. sector_num = '6' print('Querying for CDIPS Sector ' + sector_num + " Observations...") obsTable = Observations.query_criteria(provenance_name = "CDIPS", sequence_number = sector_num) print("Found a total of " + str(len(obsTable)) + " CDIPS targets.") print('Downloading data products for these observations...') for obs in obsTable: data_products = Observations.get_product_list(obs) Observations.download_products(data_products)
NOTE: The above query can timeout for some users, due to internet bandwidth or traffic on the database at MAST. If so, an alternative is to use the bulk download scripts, which will download products via cURL commands given the complete list of CDIPS targets for a given Sector. | https://archive.stsci.edu/hlsp/cdips | CC-MAIN-2020-50 | refinedweb | 1,794 | 52.29 |
#include <grid_out.h>
Flags describing the details of output for encapsulated postscript. In this structure, the flags common to all dimensions are listed. Flags which are specific to one space dimension only are listed in derived classes.
By default, the size of the picture is scaled such that the width equals 300 units.
Definition at line 285 of file grid_out.h.
Enum denoting the possibilities whether the scaling should be done such that the given
size equals the width or the height of the resulting picture.
Definition at line 292 of file grid_out.h.
Constructor.
Declare parameters in ParameterHandler.
Parse parameters of ParameterHandler.
See above. Default is
width.
Definition at line 300 of file grid_out.h.
Width or height of the output as given in postscript units This usually is given by the strange unit 1/72 inch. Whether this is height or width is specified by the flag
size_type.
Default is 300.
Definition at line 309 of file grid_out.h.
Width of a line in postscript units. Default is 0.5.
Definition at line 314 of file grid_out.h.
Should lines with a set
user_flag be drawn in a different color (red)? See GlossUserFlags for information about user flags.
Definition at line 321 of file grid_out.h.
This is the number of points on a boundary face, that are ploted additionally to the vertices of the face.
This is used if the mapping used is not the standard
MappingQ1 mapping.
Definition at line 330 of file grid_out.h.
Should lines be colored according to their refinement level? This overrides color_lines_on_user_flag for all levels except level 0. Colors are: level 0: black, other levels: rainbow scale from blue to red.
Definition at line 338 of file grid_out.h. | http://www.dealii.org/developer/doxygen/deal.II/structGridOutFlags_1_1EpsFlagsBase.html | CC-MAIN-2014-15 | refinedweb | 288 | 69.89 |
and Answers
These are Hadoop Basic Interview Questions and Answers for freshers and
experienced.
1..
3. How big data analysis helps businesses increase their revenue? Give
example.
Big data analysis is helping businesses differentiate themselves for example Walmart the
worlds.
Here is an interesting video that explains how various industries are leveraging big data analysis
to increase their revenue
To view a detailed list of some of the top companies using Hadoop CLICK HERE
5. Differentiate between Structured and Unstructured data.
Data which can be stored in traditional database systems in the form of rows and columns, for
example the online purchase transactions. Facebook
updates, Tweets on Twitter, Reviews, web logs, etc. are all examples of unstructured data.
1)HDFS Hadoop Distributed File System is the java based file system for scalable and reliable
storage of large datasets. Data in HDFS is stored in the form of blocks and it operates on the
Master Slave Architecture.
Here is a visual that clearly explain the HDFS and Hadoop MapReduce Concepts-
1) Hadoop Common
2) HDFS
3) Hadoop MapReduce
4) YARN
Data Management and Monitoring Components are - Ambari, Oozie and Zookeeper.
10..
We have further categorized Big Data Interview Questions for Freshers and Experienced-
Block Scanner - Block Scanner tracks the list of blocks present on a DataNode and verifies
them to find any kind of checksum errors. Block Scanners use a throttling mechanism to reserve
disk bandwidth on the datanode.
edits file-It is a log of changes that have been made to the namespace since checkpoint.
Checkpoint NodeCheckpoint Node keeps track of the latest checkpoint in a directory that has same structure as
that of NameNodes)
redundancy whereas HDFS runs on a cluster of different machines thus there is data
redundancy because of the replication protocol.
NAS stores data on a dedicated hardware whereas in HDFS all the data blocks
MapReduce cannot be used for processing whereas HDFS works with Hadoop
MapReduce as the computations in HDFS are moved to data..
We have further categorized Hadoop HDFS Interview Questions for Freshers and Experienced-
1)setup () This method of the reducer is used for configuring various parameters like the input
data size, distributed cache, heap size, etc.
3)cleanup () - This method is called only once at the end of reduce task for clearing all the
temporary files.
A new class must be created that extends the pre-defined Partitioner Class.
The custom partitioner to the job can be added as a config file in the wrapper
which runs Hadoop MapReduce or the custom partitioner can be added to the job by
using the set method of the partitioner class.
5. What is the relationship between Job and Task in Hadoop?
A single job can be broken down into one or many tasks in Hadoop.
7..
1)Shuffle
2)Sort
3)Reduce
9..
We have further categorized Hadoop MapReduce Interview Questions for Freshers and
Experienced-
3)If the application demands key based access to data while retrieving.
Zookeeper- It takes care of the coordination between the HBase Master component and the
client.
Catalog Tables-The two important catalog tables are ROOT and META.ROOT table tracks
where the META table is and META table stores all the regions in the system.
Table Level Operational Commands in HBase are-describe, list, drop, disable and scan.
4. Explain the difference between RDBMS data model and HBase data
model.
RDBMS is a schema based database whereas HBase is schema less data model.
RDBMS does not have support for in-built partitioning whereas in HBase there is automated
partitioning.
6. What is column families? What happens if you alter the block size of
ColumnFamily on an already populated database?.
1)Family Delete Marker- This markers marks all columns for a column family.
We have further categorized Hadoop HBase Interview Questions for Freshers and Experienced-
--import \
--connect jdbc:mysql://localhost/db \
--username root \.
1)Append
2)Last Modified
To insert only rows Append should be used in import command and for inserting the rows and
also updating Last-Modified should be used in the import command.
6. How can you check all the tables present in a single database using
Sqoop?
The command to check the list of all tables present in a single database using Sqoop is as
follows-.
We have further categorized Hadoop Sqoop Interview Questions for Freshers and Experienced-
Source- This is the component through which data enters Flume workflows.
Client- The component that transmits event to the source that operates with the agent.
clusters and also the novel HBase IPC that was introduced in the version HBase
0.96..
Zookeeper-client command is used to launch the command line client. If the initial prompt is
hidden by the log messages after entering the command, users can just hit ENTER to view the
prompt.
The Znodes that get destroyed as soon as the client that created it
disconnects are referred to as Ephemeral Znodes.
Pig coding approach is comparatively slower than the fully tuned MapReduce
coding approach.
Read More in Detail-
8) What is the usage of foreach operation in Pig scripts?
FOREACH operation in Apache Pig is used to apply transformation to each element in the data
bag so that respective action is performed to generate new data items.
Tuples- Just similar to the row in a table where different items are separated
by a comma. Tuples can have multiple attributes.
We have further categorized Hadoop Pig Interview Questions for Freshers and Experienced-
Release 2.4.1
We have further categorized Hadoop YARN Interview Questions for Freshers and Experienced-
4)Is it possible to change the default location of Managed Tables in Hive, if so how?
8)What is SerDe in Hive? How can you write yourown customer SerDe?
9)In case of embedded Hive, can the same metastore be used by multiple users?
Or
5)What are the modules that constitute the Apache Hadoop 2.0 framework?
We hope that these Hadoop Interview Questions and Answers have pre-charged you for your
next Hadoop Interview.Get the Ball Rolling and answer the unanswered questions in the
comments below.Please do! It's all part of our shared mission to ease Hadoop Interviews for all
prospective Hadoopers.We invite you to get involved. | https://de.scribd.com/document/306230653/Big-Data-Hadoop-Interview-Questions-and-Answers | CC-MAIN-2019-35 | refinedweb | 1,039 | 54.22 |
The first version of Windows Identity Foundation was released in November 2009, in form of out of band package. There were many advantages in shipping out of band, the main one being that we made WIF available to both .NET 3.5 and 4.0, to SharePoint, and so on. The other side of the coin was that it complicated redistribution (when you use WIF in Windows Azure you need to remember to deploy WIF’s runtime with your app) and that it imposed a limit to how deep claims could be wedged in the platform. Well, things have changed. Read on for some announcements that will rock your world!
With .NET 4.5, WIF ceases to exist as a standalone deliverable. Its classes, formerly housed in the Microsoft.IdentityModel assembly & namespace, are now spread across the framework as appropriate. The trust channel classes and all the WCF-related entities moved to System.ServiceModel.Security; almost everything else moved under some sub-namespace of System.IdentityModel. Few things disappeared, some new class showed up; but in the end this is largely the WIF you got to know in the last few years, just wedged deeper in the guts of the .NET framework. How deep?
Very deep indeed.
To get a feeling of it, consider this: in .NET 4.5 GenericPrincipal, WindowsPrincipal and RolePrincipal all derive from ClaimsPrincipal. That means that now you’ll always be able to use claims, regardless of how you authenticated the user!
In the future we are going to talk more at length about the differences between WIF1.0 and 4.5. Why did we start talking about this only now? Well, because unless your name is Dominic or Raf chances are that you will not brave the elements and wrestle with WS-Federation without some kind of tool to shield you from the raw complexity beneath. Which brings me to the first real announcement of the day.
I am very proud to announce that today we are releasing the beta version of the WIF tooling for Visual Studio 11: you can get it from here, or directly from within Visual Studio 11 by searching for “identity” directly in the Extensions Manager.
The new tool is a complete rewrite, which delivers a dramatically simplified development-time experience. If you are interested in a more detailed brain dump on the thinking that went in this new version, come back in few days and you’ll find a more detailed “behind the scenes” post. To give you an idea of the new capabilities, here there are few highlights:
Lots of new capabilities, all the while trying to do everything with less steps and simply things! Did we succeed? You guys let us know!
In V1 the tools lived in the SDK, which combined the tool itself and the samples. When venturing in Dev11land, we decided there was a better way to deliver things to you: read on!
If you had the chance to read the recent work of Nicholas Carr, you’ll know he is especially interested on the idea of unbundling: in a (tiny) nutshell, traditional magazines and newspapers were sold as a single product containing a collection of different content pieces whereas the ‘net (as in the ‘verse) can offer individual articles, with important consequences on business models, filter bubble, epistemic closure and the like (good excerpt here). You’ll be happy to know that this preamble is largely useless, I just wanted to tell you that instead of packing all the samples in a single ZIP you can now access each and every one of them as individual downloads, again both via browser (code gallery) or from Visual Studio 11’s extensions manager. The idea is that all samples should be easily discoverable, instead of being hidden inside an archive; also, the browse code feature is extremely useful when you just want to look up something without necessarily download the whole sample .
Also in this case we did our best to act on the feedback you gave us. The samples are fully redesigned, accompanied by exhaustive readmes and their code is thoroughly documented. Ah, and the look&feel does not induce that “FrontPage called from ‘96, it wants his theme back” feeling . While still very simple, as SDK samples should be, they attempt to capture tasks and mini-scenarios that relate to the real-life problems we have seen you using WIF for in the last couple of years.
Icing on the case: if you download a sample from Visual Studio 11, you’ll automatically get the WIF tools . Many of the samples have a dependency on the new WIF tools, as every time we need a standard STS (e.g. we don’t need to show features that can be exercised only by creating a custom STS) we simply rely on the local STS.
Normally at this point I would encourage you to go out and play with the new toys we just released, but while I have your attention I would like to introduce you to the remarkable team that brought you all this, as captured at our scenic Redwest location:
…and of course there’s always somebody that does not show up at picture day, but I later chased them down and immortalized their effigies for posterity:
Thank you guys for an awesome, wild half year! Looking forward to fight again at your side
A sample that is missing is for a claims aware out-of-browser Silverlight application using WS-Trust and an Active STS. Previously there was a hands-on lab Developing Identity-Driven Silverlight Applications in the Identity Developer Training Kit. Will this be updated for .Net 4.5 and added to the list of samples?
Any chance the tools get updated to work with VS2012RC?
Any news on the missing Silverlight OOB sample as commented by Remco Blok? | http://blogs.msdn.com/b/vbertocci/archive/2012/03/15/windows-identity-foundation-in-the-net-framework-4-5-beta-tools-samples-claims-everywhere.aspx | CC-MAIN-2015-35 | refinedweb | 980 | 59.53 |
IRC log of xproc on 2008-03-06
Timestamps are in UTC.
15:05:59 [RRSAgent]
RRSAgent has joined #xproc
15:05:59 [RRSAgent]
logging to
15:06:03 [Norm]
Zakim, this will be xproc
15:06:03 [Zakim]
ok, Norm; I see XML_PMWG()11:00AM scheduled to start in 54 minutes
15:26:35 [Norm]
XProc WG meets on 6 Mar at 11:00 EST,
15:47:23 [MoZ]
MoZ has joined #xproc
15:53:00 [Norm]
Meeting: XML Processing Model WG
15:53:00 [Norm]
Date: 6 March 2008
15:53:00 [Norm]
Agenda:
15:53:00 [Norm]
Meeting: 103
15:53:00 [Norm]
Chair: Norm
15:53:01 [Norm]
Scribe: Norm
15:53:03 [Norm]
ScribeNick: Norm
15:53:43 [ruilopes]
ruilopes has joined #xproc
15:55:30 [Zakim]
XML_PMWG()11:00AM has now started
15:55:37 [Zakim]
+Norm
15:55:45 [PGrosso]
PGrosso has joined #xproc
15:56:43 [Zakim]
+??P3
15:56:46 [Zakim]
-Norm
15:56:48 [Zakim]
+Norm
15:56:50 [ruilopes]
Zakim, ? is me
15:56:50 [Zakim]
+ruilopes; got it
15:57:37 [MoZ]
Zakim, what is the code?
15:57:37 [Zakim]
the conference code is 97762 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), MoZ
15:58:13 [alexmilowski]
alexmilowski has joined #xproc
15:58:30 [Zakim]
+ +95247aaaa
15:58:33 [MoZ]
Zakim, aaaa is me
15:58:33 [Zakim]
+MoZ; got it
15:59:32 [Zakim]
+ +1.415.404.aabb
15:59:48 [MoZ]
Zakim, aabb is Alex
15:59:48 [Zakim]
+Alex; got it
16:00:08 [MoZ]
Zakim, who is here ?
16:00:08 [Zakim]
On the phone I see Norm, ruilopes, MoZ, Alex
16:00:10 [Zakim]
On IRC I see alexmilowski, PGrosso, ruilopes, MoZ, RRSAgent, Zakim, MSM, Norm
16:00:27 [MoZ]
Zakim, Alex is alexmilowski
16:00:27 [Zakim]
+alexmilowski; got it
16:00:32 [MoZ]
Zakim, who is here ?
16:00:32 [Zakim]
On the phone I see Norm, ruilopes, MoZ, alexmilowski
16:00:33 [Zakim]
On IRC I see alexmilowski, PGrosso, ruilopes, MoZ, RRSAgent, Zakim, MSM, Norm
16:00:54 [Zakim]
+[ArborText]
16:01:03 [Zakim]
+Murray_Maloney
16:01:51 [Norm]
Zakim, who's on the phone?
16:01:51 [Zakim]
On the phone I see Norm, ruilopes, MoZ, alexmilowski, PGrosso, Murray_Maloney
16:03:08 [richard]
richard has joined #xproc
16:04:38 [Zakim]
+??P6
16:04:39 [richard]
zakim, ? is me
16:04:39 [Zakim]
+richard; got it
16:05:13 [Norm]
Present: Norm, Rui, Mohamed, Alex, Paul, Richard, Murray
16:05:17 [Norm]
MSM, XProc?
16:05:41 [Norm]
Topic: Accept this agenda?
16:05:41 [Norm]
->
16:05:52 [AndrewF]
AndrewF has joined #xproc
16:05:54 [Norm]
Accepted.
16:05:59 [Norm]
Present: Norm, Rui, Mohamed, Alex, Paul, Richard, Murray, Andrew
16:06:05 [Norm]
Topic: Accept minutes from the previous meeting?
16:06:05 [Norm]
->
16:06:21 [Zakim]
+??P7
16:06:27 [AndrewF]
Zakim, ? is Andrew
16:06:27 [Zakim]
+Andrew; got it
16:06:28 [Norm]
Accepted.
16:06:36 [Norm]
Topic: Next meeting: telcon 13 March 2008?
16:07:09 [Norm]
Mohamed gives probably regrets, perhaps until our respective daylight savings times align
16:07:15 [Norm]
Andrew gives regrets for next week
16:07:24 [Norm]
Topic: Last call comments
16:07:30 [Norm]
->
16:07:54 [Norm]
Topic: 58. Scope of options
16:08:08 [Norm]
->
16:08:24 [Norm]
->
16:11:01 [Norm]
Norm attempts to summarize his message 66.
16:11:20 [Norm]
Richard: So options obscure inherited options immediately.
16:12:32 [Norm]
Norm: Yes.
16:13:21 [Norm]
Richard: How is this like XSLT?
16:13:23 [Norm]
Norm: It's the same.
16:13:37 [Norm]
s/It's/The solution I conclude with in message 66 is/
16:14:08 [Norm]
Norm: Anyone think more time on the list will help?
16:14:15 [Norm]
Mohamed: No, I don't think so.
16:14:27 [Norm]
Richard: There are three different situations in which you use p:option:
16:14:33 [Norm]
...1. When calling an atomic step
16:14:40 [Norm]
...2. When declaring an atomic step
16:14:50 [Norm]
...3. In a compound step where it counts as both.
16:15:17 [Norm]
Norm: Yes.
16:15:52 [Norm]
Richard: In the calling of an atomic step, does it bind it in there as well? If you bind 'a' on atomic step, does that get used immediately.
16:16:28 [Norm]
Norm: No. That's the subtle distinction, options in atomic steps aren't declarations.
16:17:11 [Norm]
Richard: Right. So that's the expected behavior. I think that sounds like the best we can do.
16:17:25 [Norm]
Norm: Anyone else?
16:18:47 [Norm]
Alex: Could we just start over?
16:19:41 [MSM]
zakim, please call MSM-617
16:19:41 [Zakim]
ok, MSM; the call is being made
16:19:42 [Zakim]
+MSM
16:19:50 [Norm]
Some discussion of getting rid of the current parameter mechanism and using p:param and p:with-param here.
16:19:52 [MSM]
zakim, drop MSM
16:19:52 [Zakim]
MSM is being disconnected
16:19:53 [Zakim]
-MSM
16:20:30 [Norm]
Present: Norm, Rui, Mohamed, Alex, Paul, Richard, Murray, Andrew, Michael[xx:20-]
16:20:37 [MSM]
zakim, please call MSM-617
16:20:37 [Zakim]
ok, MSM; the call is being made
16:20:39 [Zakim]
+MSM
16:22:22 [Norm]
Richard: The case of options on a subpipeline, that's more like variable than parameters.
16:23:02 [Norm]
...We're left with option on an atomic step decl is like xsl:param, option on an atomic step is like xsl:with-param, and option on a compound step is like xsl:variable
16:23:07 [Norm]
Norm: Yes, I think so.
16:23:48 [Norm]
Norm: There's a sense in which I'd like to add p:variable, but I'm reluctant.
16:26:24 [Norm]
The analogy in XSLT would be:
16:26:25 [Norm]
<xsl:param
16:26:25 [Norm]
<xsl:call-template
16:26:25 [Norm]
<xsl:with-param
16:26:25 [Norm]
<xsl:with-param
16:26:25 [Norm]
</xsl:call-template>
16:27:15 [PGrosso]
ac
16:27:32 [Norm]
Richard: Renaming the things is something we can do or not do regardless of how we decide the scoping quesiton.
16:27:45 [Norm]
...We should do this now and deal with renaming later.
16:28:20 [Norm]
Richard: I agree with Michael in principle that it would be easier if we renamed things.
16:28:40 [Norm]
Michael: Why do we not have a single name proposed for all instances of calling things?
16:29:31 [Norm]
Norm: We have the design we arrived at by consensus :-)
16:30:05 [Norm]
Richard/Michael discuss how this is related to ALGOL and Lisp
16:30:25 [Norm]
Proposed: we finesse the problem and say that the options that are in scope are all of those *declared* by preceding-siblings or *declared* by ancestors.
16:30:30 [PaulG]
PaulG has joined #xproc
16:32:05 [Norm]
Michael: I like that, and I'd like to call 1 and 3 option and 2 with-option.
16:33:08 [Norm]
Richard: I think what I'd like is to rename options to parameters, so we have param, with-param and variable and the things we currently call parameters we call something else.
16:33:26 [Norm]
Norm: Absent a proposal that replaces our current parameter mechanism, I don't think that's practical.
16:34:34 [Norm]
Michael: Our existing parameters are things we hand off to black boxes. Right?
16:34:42 [Norm]
Norm: yes.
16:35:00 [Norm]
Michael: They are the name/value pairs I give to XSLT ot initialize xsl:params at the top-most level of the stylesheet.
16:35:07 [Norm]
...I don't know how this related to one stylesheet calling another.
16:35:12 [Norm]
s/related/relates/
16:36:16 [Norm]
...I'd be happy to use with-param for all of them
16:36:29 [Norm]
Norm: We have options and parameters and we need to keep those two bags separate
16:38:12 [Norm]
Some discussion of options and paramters and namespaces and lions and tigers and bears
16:38:27 [MSM]
my firefox has decided to launch a background process; i've got to kill it
16:38:35 [Norm]
Richard: Is it productive to continue talking about this here? If someone can come up with a better ansewr, I'd be delighted, but I doubt it's going to happen in the next 20 minutes.
16:38:57 [Norm]
Proposed: we finesse the problem and say that the options that are in scope are all of those *declared* by preceding-siblings or *declared* by ancestors.
16:39:27 [Norm]
Accepted.
16:39:36 [MSM]
so they are unprefixed in practice, but qualified in theory -- analogy to the QT function namespace
16:39:44 [Norm]
Topic: #83 Handling of system IDs
16:39:57 [Norm]
->
16:40:08 [Norm]
->
16:41:16 [Norm]
Norm: We really need to work this one out. I'm not sure we can do it in 20 minutes, though...
16:42:03 [Norm]
Norm: Perhaps someone would take an action to come back with a proposal.
16:42:16 [Norm]
Richard proposes Henry.
16:43:32 [Norm]
ACTION: Norm to try to get Henry to tell us the right answer.
16:44:04 [Norm]
Topic: #119 (editorial) p:directory-list
16:44:25 [Norm]
ACTION: Alex to fix p:directory-list
16:44:59 [Norm]
Topic: #124 p:log feels clunky
16:45:10 [Norm]
16:45:58 [Norm]
Norm: I'm not inclined to go there, it requires solving the mixing several streams of XML into a single document problem.
16:46:10 [Norm]
Proposed: Reject.
16:46:16 [Norm]
Accepted.
16:46:44 [Norm]
Topic: #125 The 'href' attribute
16:47:04 [Norm]
Norm: The commenter wonders why we call things that aren't hypertext references "href". The answer is precedent. So why not on xsl-formatter?
16:48:36 [Norm]
Richard: I don't think href is used for places that you're going to write *to*
16:49:04 [Norm]
Norm: Anyone feel strongly that we should resolve this inconsistency?
16:49:18 [MSM]
Norm (and MSM, silently): yes, it is, at least for XSLT 2.0 result documents.
16:50:02 [Norm]
Alex: Is this really inconsistent?
16:50:11 [Norm]
Mohamed: I think we should use href everywhere.
16:50:15 [Norm]
Alex: I agree.
16:50:30 [Norm]
Proposed: Rename uri on p:xsl-formatter to href
16:50:32 [Norm]
Accepted.
16:51:01 [Norm]
Topic: Comment #126 p:pipeline name attribute
16:52:01 [Norm]
Norm isn't sure he understands the question
16:52:23 [Norm]
Richard: We did talk about this before, if you wanted to name it for some purpose other than calling it, you might want to give it a name.
16:52:41 [Norm]
Mohamed: You could use the p:declare-step form if you wanted to name it, right?
16:52:43 [Norm]
Norm: yes
16:53:09 [Norm]
Norm: I'm not sure if that means we should allow a name on p:pipeline or nt.
16:53:16 [Norm]
s/or nt/or not/
16:53:45 [Norm]
ACTION: Norm to point out p:declare-step to the commenter and see if they're satisfied.
16:53:58 [Norm]
Topic: #109 REsponse headers in p:http-request
16:54:03 [Norm]
s/REs/Res/
16:54:11 [Norm]
Norm: Alex, this is on your radar?
16:54:30 [Norm]
Alex: I already have an action to fix this.
16:54:51 [Norm]
...There's nothing to do there accept remove the content-type restriction.
16:55:00 [Norm]
Norm: Ok, reply to the message when you check in the changes.
16:55:21 [Norm]
Topic: Any other business?
16:55:27 [Norm]
None. Adjourned.
16:55:32 [Zakim]
-Murray_Maloney
16:55:51 [Zakim]
-MSM
16:55:53 [Zakim]
-Andrew
16:55:54 [Zakim]
-ruilopes
16:55:54 [Zakim]
-PGrosso
16:55:55 [Zakim]
-MoZ
16:55:55 [Zakim]
-richard
16:55:59 [PaulG]
PaulG has left #xproc
16:56:01 [Norm]
Zakim, who's on the phone?
16:56:01 [Zakim]
On the phone I see Norm, alexmilowski
16:56:45 [Norm]
ACTION: Norm to investigate parameters to sha1 for p:hash
16:57:41 [Norm]
RRSAgent, set logs world-visible
16:57:46 [Norm]
RRSAgent, draft minutes
16:57:46 [RRSAgent]
I have made the request to generate
Norm
17:05:55 [Zakim]
-Norm
17:05:56 [Zakim]
-alexmilowski
17:05:57 [Zakim]
XML_PMWG()11:00AM has ended
17:05:58 [Zakim]
Attendees were Norm, ruilopes, +95247aaaa, MoZ, +1.415.404.aabb, alexmilowski, PGrosso, Murray_Maloney, richard, Andrew, MSM
19:06:51 [Norm]
Norm has joined #xproc
19:12:26 [Zakim]
Zakim has left #xproc
20:29:10 [Norm]
Norm has joined #xproc | http://www.w3.org/2008/03/06-xproc-irc | CC-MAIN-2015-35 | refinedweb | 2,206 | 78.28 |
I want to create a batch file that does the following:
Scan a folder's contents including subfolders for movie media
Parse from the file name, the movie name and the year, since I am using theRenamer to rename all my movies in that format, it shouldn't be too hard.
Then send the movie title and year to an api like and retrieve the json data, and store it into variables.
Then I will work with atomicparsely, to set the new data if populate to the movie file's metadata.
IF you can help with one part of this, I will appreciate it.
Thank you kindly.
What you are asking is a bit much for a simple batch file (i'm assuming you are using windows yes?) espectialy when you said that you want to be able to fetch data from the web. The easiest approach to this would probably be to use a a scripting language like python. Have the batch file simple kick off the program. Python has a lot of ready made libraries for helping you do exactly what you are asking for there.
But, and i'm not 100% certain on this as I'm not a windows batch guru, but I don't think what you are asking is even possible in windows batch. each line of a batch script is an entirely independent command and so it is hard to share information needed to do what you are describing. It could probably be done with Linux Bash files, but that would probably take far more effort than just using a language that is designed for that sort of thing. batch and bash really aren't.
some example code in python
import os
subs = os.listdir(path_to_your_files)
Then make some function that finds all the movies in that folder, and its sub folders recursively then it isn't hard to parse out the information from the name as you want to do.
An IMDB python api can be found here
I'm not familiar with atomicsparsely, but if worse came to worse you could make the commandline calls you need as python strings and then call them from python. not fantastic, but it would certainly get the job done.
asked
4 years ago
viewed
624 times
active | http://superuser.com/questions/363744/how-to-create-a-batch-file-to-add-metadata-to-movies-and-rename-them | CC-MAIN-2016-30 | refinedweb | 383 | 77.06 |
Some background:
I am trying to learn Java because it is fun. I have grasp the basic and are now trying to make a simple UI.
The problem:
I have created a simple adding program that adds value 1 and value 2 into a sum when you push the button "add". Though, it do not do that. Upon pressing "Add" nothing happens.
I have checked the code agains the book and looking through the code ten times. No errors. The programs runs fine but it do not work.
The code:
//This is a test program for a UI //using JFrame and a simple adding code //taking the value of first number and sum it up with //the second value //Example: Number 1: 4 Number 2: 4 The sum: 8 //What is it that I am missing in the code? //Code from the book "Java Programming 24-hour trainer" //ISBN 978-0-470-88964-0 import javax.swing.*; import java.awt.FlowLayout; public class SimpleCalculator { public static void main(String[] args){ //Create a panel JPanel windowContent = new JPanel(); //Set a layout manage"); //Add the panel to the top-level container frame.setContentPane(windowContent); //set the size and make the window visible frame.setSize(444,444); frame.setVisible(true); } }
The computer:
I have an IBM ThinkPad T43 using an Intel Centrino 740 processor running at 1.8 GHz. RAM is 1.5 GB.
The operating system is Windows 7 Professional 32-bit.
The Java is, according to CMD:
java version "1.6.0_30"
Java SE Runtime Environment (build 1.6.0_30-b12)
Java HotSpot Client VM (build 20.5-b03, mixed mode, sharing)
javac 1.6.0_30
The question:
What is it that I am missing in the code? What have I done wrong?
Thank you for looking at my thread.
| http://www.dreamincode.net/forums/topic/262201-what-is-it-that-i-am-missing/ | CC-MAIN-2016-36 | refinedweb | 299 | 75.3 |
Greetings, I'm getting an error in my code where I include a header file in more than one location and I get the LNK2005 error during linking. I know that in C a code block is needed around the header file so that it is only included once but I forgot what that code block looks like (I'm used to C++ where this problem doesn't occur).
Isn't it something like?
Am I close? It's been a whileAm I close? It's been a whileCode:#ifndef HEADER #define HEADER //header code //more header code #ifdef HEADER #endif HEADER | http://cboard.cprogramming.com/c-programming/84821-lnk2005-object-already-defined-error-not-library-conflict.html | CC-MAIN-2015-48 | refinedweb | 102 | 80.82 |
NeXT Computers
Log in to check your private messages
So I wanted to look at doing a cross compiler to NS
NeXT Computers Forum Index
->
Porting New Software
View previous topic
::
View next topic
Author
neozeed
Joined: 15 Apr 2006
Posts: 716
Location: Hong Kong
Posted: Mon Aug 10, 2015 7:47 am
Post subject: So I wanted to look at doing a cross compiler to NS
I found these disk images:
Code:
-rw-r--r-- 1 jsteve staff 2949166 4 Nov 2002 gnu2.1_1of5.diskimage
-rw-r--r-- 1 jsteve staff 2949166 4 Nov 2002 gnu2.1_2of5.diskimage
-rw-r--r-- 1 jsteve staff 2949166 4 Nov 2002 gnu2.1_3of5.diskimage
-rw-r--r-- 1 jsteve staff 2949166 4 Nov 2002 gnu2.1_4of5.diskimage
-rw-r--r-- 1 jsteve staff 2949166 4 Nov 2002 gnu2.1_5of5.diskimage
And as you can see they are 46 bytes too big.
So I dd to skip the 46 'hopeful' extra bytes of this weird headder
Code:
00000000 00 00 00 01 02 00 00 00 00 50 00 00 00 03 00 00 |.........P......|
00000010 00 07 00 00 00 03 00 2d 00 00 00 00 00 01 00 00 |.......-........|
00000020 02 00 02 00 00 00 00 24 1b 53 00 00 16 80 |.......$.S....|
0000002e
And I get what looks like a NeXTSTEP filesystem!
Code:
00000000 64 6c 56 33 00 00 00 00 00 00 00 00 32 2e 31 5f |dlV3........2.1_|
00000010 47 4e 55 5f 53 6f 75 72 63 65 20 23 32 00 00 00 |GNU_Source #2...|
00000020 00 00 00 00 00 00 00 00 c9 b9 d7 a5 53 6f 6e 79 |............Sony|
00000030 20 4d 50 58 2d 31 31 31 4e 20 35 37 36 30 2d 35 | MPX-111N 5760-5|
00000040 31 32 00 00 72 65 6d 6f 76 61 62 6c 65 5f 72 77 |12..removable_rw|
00000050 5f 66 6c 6f 70 70 79 00 00 00 00 00 00 00 04 00 |_floppy.........|
00000060 00 00 00 02 00 00 00 12 00 00 00 50 00 00 01 2c |...........P...,|
00000070 00 60 00 00 00 00 00 00 00 00 00 00 ff ff ff ff |.`..............|
00000080 00 00 00 20 66 64 6d 61 63 68 00 00 00 00 00 00 |... fdmach......|
00000090 00 00 00 00 00 00 00 00 00 00 00 00 6c 6f 63 61 |............loca|
000000a0 6c 68 6f 73 74 00 00 00 00 00 00 00 00 00 00 00 |lhost...........|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 61 62 00 00 |............ab..|
000000c0 00 00 00 00 0d e0 20 00 04 00 74 00 00 20 10 00 |...... ...t.. ..|
000000d0 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
000000e0 00 00 01 34 2e 33 42 53 44 00 00 00 ff ff ff ff |...4.3BSD.......|
000000f0 ff ff ff ff ff ff ff ff 00 00 ff ff ff ff ff 00 |................|
So I try to mount it, and.. no go.
Quote:
Jasons-MacBook-Air:next jsteve$ ufs --dmg 2 --type nextstep /tmp/x
Jasons-MacBook-Air:next jsteve$ ls /tmp/x
2.1_GNU_Source.pkg
Jasons-MacBook-Air:next jsteve$ cat /tmp/x/2.1_GNU_Source.pkg/2.1_GNU_Source.info
Descriptionlds wThis package contains source code for NeXT's Release 2.1 GNU-based development tools. Use of this software is governed by the GNU General PubliJasons-MacBook-Air:next jsteve$ Source #%dand floppy locations
Jasons-MacBook-Air:next jsteve$ cp /tmp/x/2.1_GNU_Source.pkg/2.1_GNU_Source.tar.Z.2 tmp
cp: /tmp/x/2.1_GNU_Source.pkg/2.1_GNU_Source.tar.Z.2: Device not configured
cp: /tmp/x/2.1_GNU_Source.pkg/2.1_GNU_Source.tar.Z.2: could not copy extended attributes to tmp/2.1_GNU_Source.tar.Z.2: No such file or directory
Jasons-MacBook-Air:next jsteve$ ls -lh tmp/
total 2048
-r--r--r-- 1 jsteve staff 1.0M 10 Aug 22:45 2.1_GNU_Source.tar.Z.2
And of course NeXTSTEP on Previous with a SCSI floppy freaks out saying the filesystem is INVALID, and the only step to fix it is to format.
Does anyone have any idea what is going on??!
Better yet, does anyone have the GNU source packages for NS 0.8/0.9/1.0/2.0/3.x??
_________________
# include <wittycomment.h>
Back to top
t-rexky
Joined: 09 Jan 2011
Posts: 283
Location: Snowy Canada
Posted: Tue Aug 18, 2015 9:59 am
Post subject:
I started working on a cross-compiler of my port of gcc-4.6.3 (I think) running on PPC OS X 10.5. I got it to the point where it would compile stuff into object files and I could link those object files on my NS3.3 box into working executables. The biggest challenge is to create a cross development path, move all the libraries and headers into it and make sure that the cross compiler tools all know to use it...
I have unfortunately no time at the moment to continue any of this work...
Back to top
Display posts from previous:
All Posts
1 Day
7 Days
2 Weeks
1 Month
3 Months
6 Months
1 Year
Oldest First
Newest First
NeXT Computers Forum Index
->
Porting New | http://www.nextcomputers.org/forums/viewtopic.php?p=21430&sid=db95449dbff2d74be745f794d400dc95 | CC-MAIN-2018-26 | refinedweb | 903 | 79.9 |
With the release of loggerfs 0.5 another major milestone has been reached. Thanks to joined efforts with Vlad there is now some good documentation on how to install and use loggerfs! Checkout for the complete scoop. In addition to web-based documentation, we've also added man-pages, MySQL caching, various improvements and I managed to create .deb packages for those that use an OS that supports the debian package manager.
I've added some minor features (support for fuse-options, setting owner permissions) and made some performance enhancements. It's been tested in a production environment for a while now and it's been working great for us. If anybody needs help deploying loggerfs feel free to drop me a line or goto .
After further testing I've added to loggerfs the ability to cache PostgreSQL
database connections. Unlike MySQL, there seems to be a signifcant overhead in
creating a PostgreSQL database connection. I determined this by trying to
import a 30Mb authlog file, which took ~20min to import w/out caching. After
adding the caching function it took ~2min on my system. MySQL didn't have any
of these problems and only required ~1min to import the entire authlog, though
it did use ~100% CPU while PostgreSQL only used ~45% (I have a hunch the
PostgreSQL DB on my dev machine isn't properly configured). And in order to do
the testing, loggerfs now first splits the input buffer by newlines. That means
you'll be able to feed existing log files into loggerfs by typing:... read more
Many new features have been addded since 0.1, most notably
I have added support for storing log files in MySQL! For my personal use,
I have continued using a PostgreSQL database which means that any feedback
on MySQL systems would be greatly appreciated. I'm very curious how it will
perform when large amounts of log files are stored. But there's much more
to this release! I've created 2 simple shell scripts: createlog and
loggerfs-reload. The former asks you a series of questions and then creates
the correct XML in the logs.xml configuration file (default location is
/usr/local/etc/loggerfs/logs.xml). The latter script can be called to reload
the configuration files (schema.xml and logs.xml) while the file system is
mounted (this was already included in the 0.1.1 release, check the news for
more details on it). Finally, I improved support for the Syslog logging format,
and have added the ability to store PostgreSQL logs as well. If there are
additional formats that you would like loggerfs to support, please don't
hesitate to send me an email. If you're unsure how to setup loggerfs, feel
free to either send me an email, drop by the forums or watch out for a tutorial
I'll be posting next week on installing and configuring loggerfs.
I fixed a minor bug where it wouldn't look at the right configuration
directory if --prefix wasn't given during the ./configure step. But more
importantly, I added the ability to reload the configuration files while
the file system is mounted! What this means is that now you won't need to
unmount and re-mount the file system every time you want to add/ delete
another log file or schema. Should make using and debugging the system a
whole lot easier. Enjoy.
Today marks the first release of the loggerfs virtual file system. loggerfs
is a fuse-based virtual FS written in C++ that allows log files of apache,
squid and other programs to be directly stored in a database. Instead of
running a cronjob every few minutes/ hours/ days to process a log file,
loggerfs listens for write-requests on the log files and automatically sends
the new data to the PostgreSQL database. There are 2 configuration files for
this project:... read more | http://sourceforge.net/p/loggerfs/news/?source=navbar | CC-MAIN-2015-35 | refinedweb | 653 | 63.9 |
A few weeks ago I was trying to implement a Bar and Pie Chart for a report in a web application. I found that most of the charting solutions on the web cost an arm and a leg. So I decided to have a bash at creating my own one.
I have been reading through the MCTS Application Development Foundation book and found a couple of chapters on using the System.Drawing namespace to output graphics and create Pie Charts in a C# application. Great stuff! However, I encountered a problem when my Chart was rendered within a web page that contains other HTML content. For some reason there was no HTML in my page and all that was displayed was my Chart.
This is how I wanted my chart to be inserted into my page:
However, when my charting code was added, my page looked like this:
After investigating this problem further, it seems that when you write the chart image to the response output stream, the whole page is served as an image, which removes all the HTML. For example:
Response.ContentType = "image/gif"; // MIME type
imageBitmap.Save(Response.OutputStream, ImageFormat.Gif);
Getting around this problem required quite a strange workaround:
- In the page where you need the chart to be displayed (we will call Report.aspx) add an ASP Image control that will link to an .aspx page that will contain your chart. Things will become clearer in the next step.
<asp:Image runat="server" ImageUrl="BarChart.aspx" />
- Create a new ASP.NET page that will contain all the code for your chart (we will call BarChart.aspx). Now you might be thinking: how can I send the figures to the chart? Well, this can be done by using Session variables or parameters within the web page link that you used in your ImageUrl in the Report.aspx page.
using System;
using System.Collections.Generic;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

public partial class BarChart : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            List<string> Questions = new List<string>();
            List<float> Values = new List<float>();

            //Check the session variables have values
            if (Session["Questions"] != null && Session["AverageValue"] != null)
            {
                Questions = (List<string>)Session["Questions"];
                Values = (List<float>)Session["AverageValue"];
            }

            Bitmap imageBitmap = new Bitmap(600, 285);
            Graphics g = Graphics.FromImage(imageBitmap);
            g.SmoothingMode = SmoothingMode.AntiAlias;
            g.Clear(Color.White);

            Brush[] brushes = new Brush[5];
            brushes[0] = new SolidBrush(Color.FromArgb(255, 216, 0));
            brushes[1] = new SolidBrush(Color.FromArgb(210, 219, 252));
            brushes[2] = new SolidBrush(Color.FromArgb(0, 127, 70));
            brushes[3] = new SolidBrush(Color.FromArgb(0, 148, 255));
            brushes[4] = new SolidBrush(Color.FromArgb(190, 99, 255));

            int xInterval = 70;
            int width = 60;
            float height = 0;

            //Draw the bar chart
            for (int i = 0; i < Values.Count; i++)
            {
                height = (Values[i] * 40); // scale the bar to the height of the Bitmap

                //Draw the bar using a specific colour
                g.FillRectangle(brushes[i], xInterval * i + 50, 260 - height, width, height);

                //Draw the legend
                g.FillRectangle(brushes[i], 420, 25 + (i * 50), 25, 25);
                g.DrawString(Questions[i], new Font("Arial", 8, FontStyle.Bold), Brushes.Black, 450, 31 + (i * 50));

                //Draw the scale
                g.DrawString(Convert.ToString(Math.Round(Convert.ToDecimal(Values[i]), 2)), new Font("Arial", 10, FontStyle.Bold), Brushes.Black, xInterval * i + 45 + (width / 3), 300 - height);

                //Draw the axes
                g.DrawLine(Pens.Black, 40, 10, 40, 260);  // y-axis
                g.DrawLine(Pens.Black, 20, 260, 400, 260); // x-axis
            }

            Response.ContentType = "image/gif";
            imageBitmap.Save(Response.OutputStream, ImageFormat.Gif);
            imageBitmap.Dispose();
            g.Dispose();
        }
        catch { }
    }
}
- Go back to the Report.aspx page and add the code to pass your values via Session variables.
//Some code that carried out calculations

//Calculate the averages
float selfAverageTotal = selfAssessValue / numberOfSections;
float otherAverageTotal = otherAssessValue / numberOfSections;

//Add generic lists
List<string> questions = new List<string>(); //To store the names for the x and y axis
List<float> averages = new List<float>();    //To store the values

questions.Add("Self Average Total");
averages.Add(selfAverageTotal);
questions.Add("Other Average Total");
averages.Add(otherAverageTotal);

//Pass lists to session variables
Session["Questions"] = questions;
Session["AverageValue"] = averages;
So the idea is that BarChart.aspx will just render our chart, and we don't care if the HTML gets wiped in that web page since we only want the image.
You might be thinking: Why didn't you use a User Control? Well, a User Control was one of the first things I tried when trying to resolve this issue, and I believe it would have been a nicer implementation. Unfortunately, my report page HTML still got rendered as an image.
If anyone knows a better way to output a chart to a webpage, then please leave a comment! Thanks!
Oh yeah, and here is what my Bar Chart looked like using the above code:
| https://www.surinderbhomra.com/Blog/2008/10/16/Outputting-Custom-Made-Charts-To-An-ASPNET-Page | CC-MAIN-2020-34 | refinedweb | 817 | 50.63 |
From: Tobias Schwinger (tschwinger_at_[hidden])
Date: 2005-06-14 13:54:14
David Abrahams wrote:
> Tobias Schwinger <tschwinger_at_[hidden]> writes:
>>is_function_type<Tag,T> is an MPL Integral Constant.
>
>
> You don't say that anywhere.
>
Well, that's what the "member '...' notation" was supposed to mean:
... - See the MPL Integral Constant concept.
>>>> an element of the set of types specified by Tag. Member function
>>>> pointers may be more const-volatile qualified than specified,
>>>
>>>
>>>Hmm, have you looked carefully at the use cases for this? Member
>>>functions often have counterintuitive is-a relationships. For
>>>example, a base class member pointer is-a derived class member
>>>pointer, but not vice-versa. It's not obvious to me that any is-a
>>>relationships should hold here. What are the use cases?
I'll have to add an annotation here: I am looking at this from the perspective
of pointer <-> object comparison: More CV qualification of the member function
pointer is always OK, as it is OK to call a const member function on a non-const
object. You need a pointer to an at least const qualified member function if the
object is const.
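
A small standalone example of the calling rule behind this (my illustration, not code from the proposal): a pointer to a const member function can be invoked through both const and non-const objects, while a const object requires an at least const-qualified pointer-to-member-function.

```cpp
#include <cassert>

struct X
{
    int get() const { return 1; } // const member function
    int inc()       { return 2; } // non-const member function
};

// A const-qualified pmf works for const and non-const objects alike.
int call_const_pmf(X const& x, int (X::*pmf)() const) { return (x.*pmf)(); }

// A non-const pmf needs a non-const object.
int call_pmf(X& x, int (X::*pmf)()) { return (x.*pmf)(); }
```

Note that passing &X::inc to call_const_pmf would not compile, since there is no implicit conversion between member function pointer types that differ in cv-qualification; in that sense accepting pointers that are "more cv-qualified than specified" is always safe.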
>>I'm thinking about removing cv indication from the tags and cv-qualifying the
>>class type, instead (see comments in the examples interpreter.hpp : line 99 and
>>function_closure.hpp : line 120).
>
>
> Not gonna look at the source, sorry. No time.
Never mind. It refers to a piece of code (to be more specific to a comment in
front of one), which we'll get rid of in many places, when using a qualified
class type to describe cv qualification of member function pointers:
// Metafunction to decorate class type with cv-qualifiers.
// ?! Should this perhaps be the default behaviour of function_type_class ?!
template< typename MemFunPtr > struct class_of //...
I believe we basically agree on this point (however I'm not sure about the
reference part of your suggestion in this direction - see below).
>>>>
>>>> value - Static constant of type size_t, evaluating to the number
>>>> of parameters taken by the type of function T describes. The hidden
>>>> this-parameter of member functions is never counted.
>>>
>>>
>>>That particular behavior does not match the needs of any use cases
>>>I've seen. I always want to treat the hidden this parameter as the
>>>function's first parameter (actually as a reference to the class, with
>>>cv-qualification given by that of the member function itself). What's
>>>the rationale?
>>
>>The rationale is that there is no unified call syntax for member function
>>pointers and non-member/static function pointers, anyway:
>
>
> Yes, but I typically build something that *does* have a common call
> syntax when wrapping member functions, i.e. I build a function object.
> I would prefer if the metafunction would reflect its signature rather
> than forcing me to assemble it from this nonuniform blob.
>
It seems to me you are requesting to optimize a generic library for your
favourite, specific use case. And honestly, I'm not even convinced this very
case will end up more optimal. Let me try to show you why:
So you are basically talking about an implementation of boost::mem_fn. Let's
start with this passage of a hypothetical and naive implementation:
// ...
template<typename MemFunPtr>
struct mem_fn_wrap2
: mem_fn_base<MemFunPtr>
// ^^^ holds protected member MemFunPtr val_mem_fun_ptr
{
typename function_type_result<MemFunPtr>::type
operator() ( typename function_type_class<MemFunPtr>::type & c
, typename function_type_parameter_c<MemFunPtr,0>::type p0
, typename function_type_parameter_c<MemFunPtr,1>::type p1 )
{
return (c.*this->val_mem_fun_ptr)(p0,p1);
}
};
1. This is inefficient: we would usually want to optimize forwarding by
substituting by-value parameters with const references except for scalar types
(like char, int, float, pointers).
2. Now let's assume we "unify" the parameters: our class type becomes the 0th
parameter
3. The above requires we use a reference for the class type so it can pass
through the metafunction that implements forward optimization (without an effect
but also without having to handle a special case)
4. Now we want to add an overload that takes a pointer to the context object
instead of a reference
^^^^^^ ((<<BOOM>>))
A slight change of our scenario turns our code into a cryptic mess (especially
when using a systematic way to generate it - i.e. Boost.Preprocessor).
=> This usually indicates our design is neither generic nor too well-chosen.
=> Conclusion: Class types _are_ special. The design should reflect this.
>>Typically we use a cascade of template specializations that does the actual
>>invocation for different function arities.
>
>
> Whaa?
>
> I don't have any "cascading" template specializations in Boost.Python
> AFAIK. I don't think Boost.Bind does either.
>
Sorry for not being very precise, here.
If it's not template specialization it's overloading or numbered functions or
functions in numbered templates or classes (hope I got them all, now)...
>>Only counting "real" parameters here allows us to build such a cascade for
>>arities ranging from zero to "OUR_MAX_ARITY_LIMIT" without having to deal with
>>the two special cases that there are no nullary member functions and that we
>>need to go up to OUR_ARITY_LIMIT+1 for member function invocations if the
>>context reference is taken into account.
>
>
> Okay, I've dealt with that issue. But it's a (really) minor one.
> Generating these things with the preprocessor should usually be done
> with vertical repetition (you'll get lousy compile times and no
> debuggability otherwise), which makes that sort of iteration bounds
> adjustment trivial.
>
This sounds like using two (slow preprocessor-) loops in client code for members
and non-members, where using one would be appropriate...
I agree it can be "minor", but I currently fail to see it's "less minor" than
your complaint in the first place.
I'm having some trouble understanding the second part of the above paragraph.
What could "otherwise" possibly refer to in this context ? And isn't vertical
repetition the slowest form of PP repetition there is ? Can you perhaps help me
with it ?
>>Think about generating the (hand-written) cascade in
>>
>> libs/function_types/example/interpreter.hpp : line 299
>>
>>with Boost.Preprocessor, for example.
>
>
> Sorry, no time to look at the code. But anyway, I've been there.
> It's easy.
>
OK. The point was that you can get away with one single and uniform, vertical
repetition for both member- and freestanding/static functions. E.g:
template<size_t Arity> struct invoker;
template<> struct invoker<0>
{
// invoker function for functions
// invoker function for member functions
};
template<> struct invoker<1>
{
// invoker function for functions
// invoker function for member functions
};
//...
template<> struct invoke< OUR_MAX_ARITY > ...
(This is a simplified pseudo-code version of what I was referring to.)
>>Even if completely binding a function (which I believe is a rather
>>seldom use case), we still need a distinction between a member and
>>non-member function pointer which won't go away no matter how we put
>>it.
>
>
> I agree with that but don't see the relevance.
>
The relevance is, that the member function pointer will need special treatment
anyway, sooner or later. We should also leave it its special properties (like
habing a class type).
There is no point in providing (an oversimplified, as shown above, for that
matter) simplification to treat member function pointers and other function
types more similar - because they are not similar!
>>Further we can still take the size of function_type_signature (well,
>>in this case the result type is counted as well).
>
>
> ?? You mean sizeof()? or mpl::size<>?
>
mpl::size
>>And parameters indices should be consistent with the function arity.
>
>
> I agree with that, but don't see the relevance. In my world, the arity of
>
> int (foo::*)(int)
>
> is 2.
>
You can look at it this way and I do not doubt it's a useful model for several
cases. However, I think it's not very true to the nature of things, because:
1. Noone should put functions into classes arbitrarily. The context reference
refers to function's primary working environment which is semantically different
from its parameters.
2. As mentioned before, there is a syntactical separation in the invocation.
3. The context reference is often passed in a CPU register instead of the stack.
So there is even a technical separation.
>>Further it makes client code more expressive.
>
>
> On what do you base that claim?
>
The separation between parameters and context reference is part of the language
so the average programmer will think of a parameter as being something declared
within a comma-separated list in parentheses following the function's name.
>>I guess it's your turn here to prove my design is faulty ;-).
>
>
> It's not so much faulty as needlessly irregular. I believe that will
> be an inconvenience in some real applications, and at the very least
> will cost more template instantiations than necessary.
For maximum efficiency (== minimum number of template instantiations) there is:
function_type_signature<T>::types
>>If you can convince me, I'll happily change things, of course.
>
>
> How'd I do?
>
I was trying to say that I'm not pedantic about this issue, but:
Rationale: "It's so, because it's Dave's taste"
is too weak, IMO ;-).
But let's not argue too much on this and suspend this discussion until after the
proof - probably I see things differently, then. Like the idea ?
> [...]
>
> But the latter doesn't look like a "use" of function_type_signature at
> all! Anyway, don't clarify your meaning for mehere; propose a
> documentation fix. Just look very carefully at the words you used
> (like "use," "primary interface," "white box," etc.), and consider how
> they might be (mis)interpreted by someone who doesn't already know
> what you're talking about.
>
It should state that 'function_type_signature' is used to implement all the
other inspection components (it is very important to give the reader this
insight, I figure).
The latter should be preferred (and thus recommended) in general, because they
make client code more readable and provide error checking.
The possibility to make direct use the type members of 'function_type_signature'
is mainly for optimization and for getting the represented type of a modified
sequence.
^^ Not literally for the docs, but it's about what this paragraph should say.
Hope no fuzzy terms are left.
>>> [... function_type_signature ]
>>>
>>>So here you are using the form I like, where class types are treated
>>>just like any other (should be a reference, though). Why not do this
>>>uniformly.
>>>
>>
>>See above.
>
>
> Still waiting for a satisfactory answer. Seems to me that your
> library only gets easier to use (and learn!) if it traffics in one
> uniform structure.
>
Oh - looks like our favourite (== only?) disagreement again:
Any template, except 'function_type_signature' and 'function_type' should be
intuitive and straightforward enough to hardly ever require a reader of client
code to look into the documentation of this library.
'function_type_parameter*' reflects its paramter list, 'function_type_signature'
reflects its signature (where the definition of what's part of the signature
should be in the docs). That hard?
> [...]
>
> Well, I suggest you say something that the average dummy is more
> likely to understand, like, "you better #include <whatever> or this
> won't compile."
;-). Probably a bit too informal to copy it literally, but I like the direction.
>> [...]
>>
>>It used to be a bit more understandable before I removed a rather
>>strange feature to grab the signature from a class template's
>>parameter list.
>
> I find it hard to believe that it was easier to understand when there
> was an *additional* feature in its behavior!
>
The other was its counterpart (in a different template)...
>>I will just remove this as well.
>
> That sounds like it goes in the right direction.
;-)
>>>Also, special-case "if" clauses, especially those that test whether
>>>something matches a concept (like "is_sequence") tend to destroy
>>>genericity.
>>>
>>
>>Would you mind explaining this in some more detail?
>
>
> Take boost::variant, which (at one time -- still?) would accept up to
> N arguments describing the held types, *OR* an MPL sequence of the
> types. The problem happens when your generic code wants to create a
> variant of one element that happens to be an MPL sequence type. Now
OK - understood.
> you need a special case that sticks that sequence in another
> sequence. Well, in that case the sequence interface is the only one
> of any use to you, isn't it?
>
> The point is that switching on a type's properties often introduces
> nonuniformity of semantics, which hurts genericity. I didn't analyze
> your case to see if it was a problem here in particular.
>
Not really, as the properties "MPL-Sequence" and "function type" should be
mutually exclusive.
Well, you can use any type which is not a function type, but it won't make sense
(and will result in a compile error) - if this type happens to be a sequence
it will "work" again. This should be acceptable as long as this behaviour is
documented, I guess.
> - Improve uniformity, if I can get it. I would probably accept the
> lib without that change, but I fear I will grumble every time I use
> it.
Well, let's see... We should be careful, here.
>
>>- change docs
>>
>>I hope (but am not entirely sure) it's all doable within the review
>>period. Any hints on a preferred prioritization?
>
>
> 1. Proof it
> 2. Post updated docs that include the proposed naming changes
> 3. Change the code.
>
> But I'm not too particular about it. I think the proof should come
> first because I'd happily accept assurances in place of steps 2 and 3
> happening before the review period ends.
Thanks again for your help.
Regards,
Tobias
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/06/88454.php | CC-MAIN-2019-13 | refinedweb | 2,261 | 56.76 |
Only way I could get this to work is to copy the files: clr.pyd, nPython.exe, Python.Runtime.dll to the directory c:\Python27 (rather than c:\Python27\DLLs). Does anyone know why this would not work? Using sys.path.append() while keeping the files elsewhere also doesn't work. On Thu, Mar 13, 2014 at 8:57 AM, Jonno <jonnojohnson at gmail.com> wrote: > Also when using clr.pyd from > pythonnet-2.0-Beta0-clr4.0_140_py27_UCS2_x86.zip I cannot import clr. I get > the following error: > "dynamic module not initialized properly" > I'm on CPython 2.7.5, Win7 32bit. > Can anyone suggest what might be causing this? I have .NET Framework 4.5 > installed. > > > On Wed, Mar 12, 2014 at 2:33 PM, Jonno <jonnojohnson at gmail.com> wrote: > >> Is it possible to compile the 2.0 Beta version containing the >> DocStringAttribute for clr version 2.0 or is there some incompatibility? >> >> I'm not familiar with how to build the pythondotnet source. >> >> >> On Wed, Mar 12, 2014 at 11:59 AM, Jonno <jonnojohnson at gmail.com> wrote: >> >>> My mistake Tony, >>> >>> I was using the 2.0 CLR version of pythondotnet which doesn't have the >>> DocStringAttribute class. >>> >>> >>> On Wed, Mar 12, 2014 at 9:46 AM, Tony Roberts <tony at pyxll.com> wrote: >>> >>>> Hi, >>>> >>>> have you added the Python.Runtime to your project references? Take a >>>> look at the Python.Test project that's used by the unit tests if you're not >>>> sure how to set up your project. >>>> >>>> cheers, >>>> Tony >>>> >>>> >>>> On Wed, Mar 12, 2014 at 2:33 PM, Jonno <jonnojohnson at gmail.com> wrote: >>>> >>>>> Thanks Tony, >>>>> >>>>> This is probably my ignorance of C# but I get the following error >>>>> using the same syntax as the example: >>>>> >>>>> The type or namespace name 'DocStringAttribute' could not be found >>>>> (are you missing a using directive or an assembly reference?) >>>>> >>>>> I have the: >>>>> using Python.Runtime >>>>> statement. 
>>>>> >>>>> >>>>> On Fri, Mar 7, 2014 at 12:19 PM, Tony Roberts <tony at pyxll.com> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> _________________________________________________ >>>>>> Python.NET mailing list - PythonDotNet at python.org >>>>>> >>>>>> >>>>> >>>>> >>>>> _________________________________________________ >>>>> Python.NET mailing list - PythonDotNet at python.org >>>>> >>>>> >>>> >>>> >>>> _________________________________________________ >>>> Python.NET mailing list - PythonDotNet at python.org >>>> >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/pythondotnet/2014-March/001491.html | CC-MAIN-2020-10 | refinedweb | 372 | 79.26 |
Interactive plotting for Python.
hannesi would like to render some bigger matrix
If you find that you’re doing it a lot, you can also do
import toyplot.config toyplot.config.autoformat = “png”
to make it the default (at the top of your notebook, say).
hannesi often want to use data from dictionaries with toyplot, eg .keys() as x and .items() as y axes. unfortunately this leads to "TypeError: float() argument must be a string or a number, not 'dict_keys'" from because numpy's conversion to an array does not like python's dict views or iterators
hannesi can wrap it into a list() call but that seems ugly
hannesno idea why numpy does not support those but i wonder if adding some logic to toyplot's scalar_array function would make sense for this usecase?
hannes```
hannesExample of what I would love to use (in an explicite way):
import toyplot data = {1: 5, 2: 3, 3: 4} canvas = toyplot.Canvas() axes = canvas.cartesian() mark = axes.plot( data.keys(), data.values(), )
hannes uses a shared x axis for a stacked plot, is there a way to make a "true" scatterplot of irregular data?
hannesi have unique x and y axes for each series
hannesjust use the default until it does not do what you need
hannesposted an image:
hanneshm, when using matplotlib, one can get an image in there from a script itself by calling plt.show()
hannesi guess that is a super magic command
hannessad how bad the state of relaying between messaging services still is :(
Hey @eaton-lab ! The replacement for
_children is the
_scenegraph, which is maintained by the canvas and provides a more flexible, open-ended way to keep track of relationships between objects. I created a notebook to demonstrate:
You will see that there's more than one type of relationship, but "render" is equivalent to the old parent-child relationship. Note that you can access
_scenegraph from the canvas or the axes, but it's all the same object. I'd be interested to hear how you're using it?
Cheers,
Tim | https://gitter.im/sandialabs/toyplot?at=5d9887a0e8de6f3ca04347c0 | CC-MAIN-2022-05 | refinedweb | 347 | 60.45 |
#include <Pt/System/Queue.h>
This class implements a thread safe queue. More...
A queue is a container where the elements put into the queue are fetched in the same order (first-in-first-out, fifo). The class has a optional maximum size. If the size is set to 0 the queue has no limit. Otherwise putting a element to the queue may block until another thread fetches a element or icreases the limit.
This method returns the next element. If the queue is empty, the thread will be locked until a element is available.
This method adds a element to the queue. If the queue has reached his maximum size, the method blocks until there is space available.
Setting the maximum size of the queue may wake up another thread, if it is waiting for space to get available and the limit is increased. | http://pt-framework.net/htdocs/classPt_1_1System_1_1Queue.html | CC-MAIN-2018-34 | refinedweb | 145 | 74.29 |
Magento Extension Dev. - Update products from remote XML
This project was awarded to phpsolutionsuk for $650 USD.Get free quotes for a project like this
Project Budget$250 - $750 USD
Total Bids10
Project Description
I need someone who is brilliant with Magento. Someone who knows their blocks from their templates, their core from their community, their xml from...you get the picture.
This project is pretty simpe - a user clicks a button in the extension admin page, it queries a remote url which returns XML, and it parses and stores that XML according to the contents.
The XML will contain information to update the attributes for each product id.
e.g. <products><id>23</id><brand>Nike</brand></products> - update product id =23, set brand = Nike
If the attributes contained in the XML do not exist, they should be created.
e.g. <products><id>24</id><texture>rough</texture></products> - update product id =24, set texture = rough
The XML will contain a jobid and create date which needs to be tracked, so that jobs are not processed more than once e.g. <xml id="123" date="18-mar-2012"><products /></xml> - check jobid 123 has never been processed before.
The XML will also contain a message to be inserted into the adminnotification_inbox e.g.
<severity>2</severity>
<title>a message with the data</title>
<description>the message description</description>
<url>#</url>
The code must use the magento product models, not direct database updates.
One more thing, I need this integrating with my existing extension. My existing extension stores a username and password which should be used in the URL of the query to get the xml e.g. [url removed, login to view]
But will also need the namespace and name set accordingly.
I am looking for a quick turnaround on this project and am looking for hoping and praying to hear from someone who can show they are a Magento expert. This is likely to result in future work as we have a couple of vacancies in our php team.
I look forward to hearing from you expert Magento freelancers!
Thanks | https://www.freelancer.com/projects/Magento/Magento-Extension-Dev-Update-products/ | CC-MAIN-2017-17 | refinedweb | 352 | 61.97 |
about:
>
> [swapper 0] Illegal instruction at Address <somewhere>.
>
> The instruction that causes this exception is:
>
> > ...
> >reload_pgd_entries:
> >#endif /* CONF_DEBUG_TLB */
> >
> > /* Load missing pair of entries from the pgd and return. */
> > mfc0 k1,CP0_CONTEXT
> > nop
> > lw k0,(k1) #Never causes nested exception
> > ^^^^^^^^^^^^^^^
> > mfc0 k1,CP0_EPC # get the return PC
> > ...
>
> in arch/mips/kernel/r2300_misc.S.
Ok, I was suspecting something like that. That particular TLB exception
handler has to be pretty different from the R4000 version. It's
doing a very trivial piece of work. The only thing that makes it
a bit more complicated is the fact that the TLB exception handlers
have to be even more optimized than Microsoft's hype. In fact the
current handlers are all far to bulky.
I'm not shure which source tree you're using. Could you post the
TLB exception handler from that tree?
Ralf | http://www.linux-mips.org/archives/linux-mips-fnet/1997-09/msg00055.html | CC-MAIN-2015-32 | refinedweb | 141 | 67.96 |
Opened 3 years ago
Closed 3 years ago
#30024 closed New feature (fixed)
The test client request methods should raise an error when passed None as a data value
Description
Both GET and POST encoded data do not have a concept of
None or
NULL. The closest approximation is an empty string value or omitting the key. For example, in a GET request this could be either
/my-url/?my_field= or simply
/my-url/ but not
/my-url/?my_field=None)
When onboarding new developers to projects, this can cause confusion to those less familiar with these details. For example, a new developer may try the following:
def test_setting_value_to_none(self): self.client.post('/my-url/', {'my_field': None}) self.assertIsNone(...)
In current versions of Django, behind the scenes, this
None gets coerced to the string
'None' by the test client. The Django form field classes don't recognize the string
'None' as an empty value (good) and so this test doesn't pass. Where the new developer thought a field would be assigned
None they instead get a form error. Depending on the developers' knowledge of these details, this could take much debugging or consulting a colleague.
I think we can recognize this pattern as a programming mistake and raise an informative error to guide the developer. I propose something like the following, but am open to suggestions:
TypeError: Cannot encode None as POST data. Did you mean to pass an empty string or omit the value?
For GET requests, the query string data is processed by
django.utils.http.urlencode(). So perhaps this same check can be done there as encoding
None in a URL query string as
'None' is rarely the intended behavior. For those that really want the string
'None' in the query string, they can pass the string
'None'.
Change History (5)
comment:1 Changed 3 years ago by
comment:2 Changed 3 years ago by
I think this is fair. There's no case where
None is the right thing to use. (As Jon says, maybe
'None' if you really mean that.)
PR | https://code.djangoproject.com/ticket/30024 | CC-MAIN-2021-31 | refinedweb | 346 | 71.85 |
Server & Tools Blogs > Server & Management Blogs > Ask the Core Team Sign in Menu Skip to content All About Windows Server Windows Server Nano Server Windows Server Essentials Not knowing your topology, it's difficult to assess, but there may be a network problem. If a Local User account is used for remote32839 14:46:13 (0) ** accesses, it will be reduced to a plain user (filtered token), even if it is part of the Local more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed
When I run interactively the script works fine, it's only when I schedule it and run it under a service account that it fails with the access denied. Promoted by Experts Exchange More than 75% of all records are compromised because of the loss or theft of a privileged credential. It is embedded in Server 2008. This cmdlet has a convenient alias, gwmi, which I’ll use for most of my examples. internet
Now it's a matter of making a few changes and rebooting and seeing if it blocks it again. Do you have something simliar to this in the event log: "...The namespace is marked with RequiresEncryption but the client connection was attempted with an authentication level below Pkt_Privacy. Then I started to get errors (HRESULT 0x80041004). Moving Forward with WMI WMI continues to be developed for future versions of Windows, adding new classes and capabilities.
Finally, here are a couple more useful links: Authentication for Remote Connections Comparing RPC, WMI and WinRM for Remote Server Management with PowerShell 2 Posted by Adil Hindistan at 5:58 PM For this, you’ll want to use gwmi’s –computer parameter to connect to a remote computer. Running gwmi –namespace "root\cimv2" –list, for example, gets you a complete list of classes in that namespace. Wmi Access Is Denied. (exception From Hresult: 0x80070005 (e_accessdenied)) The objective is to prove that the supplied credential works, anyway to do that without going after WMIOject?
You Then provide the level accesses and entering the credentials. But of course, logging in the actual server, it comes up fine. I've got it working.: Powershell Access Is Denied Connecting To A Remote Computer get-AccountLogin : Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) + get-AccountLogin < <<< -computer $computer -userName $userName -password $password + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,get-AccountLoginOf Enable Distributed COM on this computer - option is checked.
It runs fine locally but not remotely. Network Access: Let everyone permissions apply to anonymous users - Set to Enabled e. Get Wmiobject Access Is Denied 0x80070005 Actually, Windows PowerShell converts any object to text in this way. Get-wmiobject Credential Password If I use the dns alias instead of actual hostname, Get-CimInstance too returned an error, but return code is different (0x80070035).
Denis Jeanveau 0 7 Sep 2012 7:30 PM In reply to richlec: Did that. navigate here What is a non-vulgar synonym for this swear word meaning "an enormous amount"? I asked the network guys and they said they hadn't made any kind of changes that would affect it. 0 LVL 16 Overall: Level 16 VB Script 5 Powershell 2 Key features such as syntax-coloring, tab completion, visual debugging, Unicode-compliance, and context-sensitive Help provide a rich scripting experience. Powershell Access Denied Set-executionpolicy
Where can I find Boeing 777 safety records? Navigate to Security\Local Policies\Security Options a. I might, for example, be logged on in a different, untrusted domain, or I could be logged on with a less-privileged account. Check This Out Tuesday, February 24, 2009 9:09 AM Reply | Quote 1 Sign in to vote have you marked a user as Trusted For Delegation?
The Service Account is local Admin on that box. Get-wmiobject : User Credentials Cannot Be Used For Local Connections a network firewall between the client and server machines. For example, a logical disk class might describe a device that has a serial number, a fixed storage capacity, an amount of available capacity, and so forth.
asked 7 years ago viewed 10315 times active 6 years ago Related 23Which permissions/rights does a user need to have WMI access on remote machines?1How to use Powershell 2 Get-WmiObject to An alternative option if you have a compelling reason not to upgrade to Windows Server 2008 R2, would be to download and install the latest Windows Management Framework as outline in Enable Distributed Com On This Computer Well, it proved to be more difficult to find information on this than I thought it would be.
Windows PowerShell retrieved all instances of the specified class and, since I hadn’t told it to do anything else with these instances, converted them into a textual representation. Does anyone know what that blue thing is? If WMI doesn’t have a class for something, it can’t manage that component. this contact form Yes I have.
As an administrator, I should have permission to work with this computer’s WMI service, but it’s likely that my local workstation credentials aren’t sufficient. Re-check your and remote system credentials to ensure that both are the same. However, I had access to another Windows 2008 R2 server with PowerShell3. Every version of Windows since Windows 2000 has come with WMI built in (later versions have expanded the number of available classes), meaning you have both the WMI client and the
Its SQL-like syntax makes it easier to retrieve specific instances—such as a specific service—rather than all instances of a given class. My research found the following... This DCOM fix vbs script was EXACTLY the solution, after a week of searching. I had thought something in the security settings or group policies did it, but then when I tried to use the script on another box (that did not have the new
Recent changes to WMI security can also cause this error to occur: Blank passwords, formerly permitted, are not allowed in Windows XP and Windows Server 2003. And, because Windows PowerShell is simply utilizing that existing architecture, it’s subject to that architecture’s security features. Any suggestions on how to prevent filtered token scenario?32835 14:46:13 (0) ** INFO: Local Account Filtering: ...................................................................................... Windows firewall is not installed but we do have McAfee (but looking at the logs, it's not blocking anything).
How does President Duterte's anti-drug campaign affect travelers in the Philippines? Start -> Control Panel -> Administrative Tools -> Local Security Policy 2. Browse down the tree to Console Root ' Component Services ' Computers ' My Computer 4. TEST01...
However you can change this behaviour using Delegate impersonation level. Menu Forums Articles Summit Calendar eBooks Videos Podcast BuildServer Swag Login You are here:Home Forums PowerShell Q&A WMI Access Denied WMI Access Denied This topic contains 0 replies, has 1 voice, Why would two species of predator with the same prey cooperate? | http://smartnewsolutions.com/access-is/get-wmiobject-access-is-denied-windows-2008.html | CC-MAIN-2017-26 | refinedweb | 1,173 | 51.58 |
pfm_get_event_attr_info − get event attribute information
#include <perfmon/pfmlib.h>
int pfm_get_event_attr_info(int idx, int attr, pfm_os_t os, pfm_event_attr_info_t *info);
This function returns in info information about the attribute designated by attr for the event specified in idx and the os layer in os..
The pfm_event_attr_info_t structure is defined as follows:
typedef struct {
const char *name;
const char *desc;
const char *equiv;
size_t
uint64_t code;
pfm_attr_t type;
int idx;
pfm_attr_ctrl_t ctrl;
int reserved1;
struct {
int is_dfl:1;
int is_precise:1;
int reserved:30;
};
union {
uint64_t dfl_val64;
const char *dfl_str;
int dfl_bool;
int dfl_int;
};
} pfm_event_attr_info_t;
The fields of this structure are defined as follows:
PFM_ATTR_UMASK
This is a unit mask, i.e., a sub-event. It is specified using its name. Depending on the event, it may be possible to specify multiple unit masks.
PFM_ATTR_MOD_BOOL
This is a boolean attribute. It has a value of 0, 1, y or n. The value is specified after the equal sign, e.g., foo=1. As a convenience, the equal sign and value may be omitted, in which case this is equivalent to =1.
PFM_ATTR_MOD_INTEGER
This is an integer attribute. It has a value which must be passed after the equal sign. The range of valid values depends on the attribute and is usually specified in its description.
is_precise
This field indicates whether or not this umask supports precise sampling. Precise sampling is a hardware mechanism that avoids instruction address skid when using interrupt-based sampling. On Intel X86 processors, this field indicates that the umask supports Precise Event-Based Sampling (PEBS).
dfl_val64, dfl_str, dfl_bool, dfl_int
This union contains the value of an attribute. For PFM_ATTR_UMASK, the is the unit mask code, for all other types this is the actual value of the attribute.
PFM_ATTR_CTRL_UNKNOWN
The source controlling the attribute is not known.
PFM_ATTR_CTRL_PMU
The attribute is controlled by the PMU hardware.
PFM_ATTR_CTRL_PERF_EVENT
The attribute is controlled by the perf_events kernel interface.
reserved
These fields must be set to zero.
If successful, the function returns PFM_SUCCESS and attribute information in info, otherwise it returns an error code.
PFMLIB_ERR_NOINIT
Library has not been initialized properly.
PFMLIB_ERR_INVAL
The idx or attr arguments are invalid or info is NULL or size is not zero.
PFM_ERR_NOTSUPP
The requested os layer has not been detected on the host system.
Stephane Eranian <eranian AT gmail DOT com> | http://man.m.sourcentral.org/f17/3+pfm_get_event_attr_info | CC-MAIN-2021-04 | refinedweb | 386 | 56.86 |
Hello,
I am trying to assign labels to feature layer using advance python labeling.
The label are fetching from feature layer related table using following python script logic, which is given at ESRI Tech Support How To: Label a related table
def FindLabel ([keyField], [FirstLabel]):
import arcpy
key1 = [keyField] # Key field in feature class
key2 = "ID" # Key field in related table
L = [FirstLabel] # Label field in feature class
L2 = "Label2" # Label field in related table
myDataTable = r"<path-to-related-table>" # Path to related table
cur = arcpy.da.SearchCursor(myDataTable, [key2, L2])
for row in cur:
if str(key1) == str(row[0]):
L = L + " " + str(row[1])
return L
I can see the labels in ArcMap or in the preview before I publish it as a web service, but cannot see them in ArcGIS Online. I suspect there might be a problem with the path I use in the label expression.
It looks something like this: r"Database Connections\TEST.sde\Related_Table". Am I missing something?
Also, I have created a brand new geodatabe, where I put a feature class and a related table I want to get the labels from. In this case my paths looks like that - r"C:\Test.gdb\Related_Table". Again, it works fine in ArcMap, but there are no labels after I have published it as a web service?
Please, help!
Thanks heaps in advance!
Dmitry,
I think you may be right on the data path. From what I recall, "\t" in python acts as a tab (not sure of "\T" would though, like you have in your path). Maybe you could try forward slashes instead of backslashes. | https://community.esri.com/thread/222147-labels-do-not-appear-in-arcgis-online | CC-MAIN-2019-13 | refinedweb | 273 | 60.14 |
Till yet, we have only been using the simple types as parameters to methods. However, it is both correct and common to pass objects to methods. For instance, look at the following short Java program :
Here is an example program, demonstrates, how to use object as method parameter in Java:
/* Java Program Example - Java Use Objects as Parameters * In Java, Objects may be passed to methods */ class Test { int a, b; Test(int i, int j) { a = i; b = j; } /* return true if o is equal to invoking object */ boolean equalTo(Test o) { if(o.a == a && o.b == b) { return true; } else { return false; } } } public class JavaProgram { public static void main(String args[]) { Test obj1 = new Test(100, 22); Test obj2 = new Test(100, 22); Test obj3 = new Test(-1, -1); System.out.println("obj1 == obj2 : " + obj1.equalTo(obj2)); System.out.println("obj1 == obj3 : " + obj1.equalTo(obj3)); } }
When the above Java program is compile and executed, it will produce the following output:
As you can see, the equalTo() method within Test compares two objects for equality and returns the result i.e., it compares the invoking objects with the one that it is passed. If they holds the same values, then the method returns true, otherwise false.
Notice that the parameter o in method named equalTo(), specifies Test as its type. Although Test is a class type created by the program, it is used in simply the same way as Java's built-in types.
One of the most common uses of the object parameters involves constructors. Often, you will want to construct a new object so that it is initially the same as some existing object. To perform this, you must define a constructor that takes an object of its class as a parameter. For example, here this version of the Box allows one object to initialize another :
/* Java Program Example - Java Use Objects as Parameters * In this program, Box allows one object to * initialize another */ class Box { double width; double height; double depth; /* notice this constructor, it takes an object of the type Box */ Box(Box ob) // pass object to the constructor { width = ob.width; height = ob.height; depth = ob.depth; } /* constructor used when all the dimensions specified */ Box(double wid, double hei, double dep) { width = wid; height = hei; depth = dep; } /* constructor used when no dimensions specified */ Box() { width = -1; // use -1 to indicate height = -1; // an uninitialized depth = -1; // box } /* constructor used when cube is created */ Box(double len) { width = height = depth = len; } /* compute and return the volume */ double volume() { return width * height * depth; } } class Overload { public static void main(String args[]) { /* create boxes using the various constructors */ Box mybox1 = new Box(100, 200, 150); Box mybox2 = new Box(); Box mycube = new Box(7); Box myclone = new Box(mybox1); // create a copy of mybox1 double vol; /* get the volume of the first box */ vol = mybox1.volume(); /* print the volume of the first box */ System.out.println("Volume of mybox1 is " + vol); /* get the volume of the second box */ vol = mybox2.volume(); /* print the volume of the second box */ System.out.println("Volume of mybox2 is " + vol); /* get the volume of the cube */ vol = mycube.volume(); /* print the volume of the cube */ System.out.println("Volume of cube is " + vol); /* get the volume of the clone */ vol = myclone.volume(); /* print the volume of the clone */ System.out.println("Volume of clone is " + vol); } }
When the above Java program is compile and executed, it will produce the following output:
As you will see, when you begin to create your own classes, giving many forms of the constructors is usually required to allow to be constructed in a convenient and efficient manner.
Java Programming Online Test
Tools
Calculator
Quick Links | https://codescracker.com/java/java-objects-as-parameters.htm | CC-MAIN-2017-39 | refinedweb | 621 | 59.13 |
January 30, 2016
An article about a mini-assignment from the edX course "Introduction to Computer Science and Programming Using Python".
The problem is as follows:
Given a character and a string of alphabetized characters, write a program that returns “True” if the character is present in the string and “False” if it is not.
The assignment gives you a few hints and specifications (naturally you cannot use Python’s 'in' statement).
My first attempt failed horribly. It started well, I could solve the test case (find 'a' in 'abc'), but I did not sufficiently understand what the program was doing. If something didn’t work, I’d quickly determine a possible cause and apply a patch. Eventually the code turned into spaghetti; it looked bad and could not handle the inputs it should for unknown reasons.
Still, I learned more about the problem and what I had to pay attention to. I realized that there were three base cases, not one. I also realized that I did not know how to slice strings.
Attempt #2 was more serious. I broke out pencil and paper and methodically solved two cases: find 'k' in 'hkqruv' and find 'u' in 'hkqruv'. Why these two? The first requires bisecting left, while the second requires bisecting right.
I have the string 'hkqruv' (aStr) and want to find 'k'. Since the string is alphabetized, bisection search is possible. We can take the length of the string (6) and divide it by 2 to get the index position of our first guess.
aStrLength = len(aStr) guessIndex = aStrLength // 2 guessChar = aStr[guessIndex]
We can test if this character equals the character we are searching for (k) and whether it is higher or lower in the alphabet than 'k'. Based on the result of our tests, the program will either return True or carry out a left or right bisection.
In the above graphic, we see our guess character in bold surrounded by two intervals/bisections. Since we are looking for 'k' and know that r > k, we should look in the left bisection.
# search the left bisection elif guessStringValue > char: section = aStr[0:guessIndexValue] return isIn(char, section)
And the above code does exactly that. If our guessCharacter is greater than the character we are searching for, we slice up the input string ('hkqruv') so that it matches the interval we want to search ('hkq') and then call the function again.
The second case is very similar:
# search the right bisection elif guessStringValue < char: section = aStr[guessIndexValue+1:] return isIn(char, section)
The whole process repeats itself until reaching one of three base cases:
In the future, I should ensure I understand how to solve the problem computationally prior to implementing it and to test incrementally. There were a few times where I moved forward on bad assumptions. For instance, I thought:
inputString[guessIndexValue:]
Would output "bc", when it outputted "abc". What I actually wanted was:
inputString[guessIndexValue+1:]
If I had taken a minute to think about it or confirm how slicing works, much frustration would have been averted.
def isIn(char, aStr): ''' char: a single character aStr: an alphabetized string returns: True if char is in aStr; False otherwise ''' aStrLength = len(aStr) # returns False when char not found or empty string is entered if aStr == "" or aStrLength == 0: return False else: guessIndexValue = aStrLength // 2 guessStringValue = aStr[guessIndexValue] if guessStringValue == char: return True # search the left bisection elif guessStringValue > char: section = aStr[0:guessIndexValue] return isIn(char, section) # search the right bisection elif guessStringValue < char: section = aStr[guessIndexValue+1:] return isIn(char, section)
isIn("k", "hkqruv")
True
isIn("z", "hkqruv")
False | https://nbviewer.jupyter.org/github/bryik/jupyter-notebooks/blob/master/computer%20science/Recursive%20Bisection%20Search.ipynb | CC-MAIN-2020-40 | refinedweb | 605 | 59.03 |
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.7) Gecko/20050414 Firefox/1.0.3
Build Identifier: version 1.0.2 (20050317)

This could either be a problem with the PADL software nss_ldap plugin for name service switch, or it could be a problem with something in thunderbird doing user name lookups, or some series of bad interactions between a likely bug in nss_ldap and something that thunderbird is doing. If you come across this bug with the same problem and just want to get things working, I found that running 'nscd' causes the problem to go away, probably because mozilla is no longer directly going into OpenLDAP or nss_ldap libraries.

Anyhow, with a pristine, fresh untar/install of thunderbird 1.0.2 (and earlier versions as well), I get a segfault on startup. It only happens when launching from a user whose identity is defined on the LDAP service instead of local files. I can start thunderbird just fine as root or other local users. If I try to use my own user ID, which comes from the LDAP server, thunderbird segfaults on startup.

Reproducible: Always

Steps to Reproduce:
1. Set up or get use of an LDAP-based user account environment, including a testing box set up to use it.
2. Set up the OpenLDAP client on the client machine.
3. Set up pam_ldap and nss_ldap to allow logins by LDAP users.
3.1 DO NOT RUN 'nscd' (nscd appears to work around this bug)
4. Do a pristine default untar install of thunderbird.
5. Try to run thunderbird as an LDAP-based user. Won't work.
6. Try to run as root. Should work if root isn't LDAP-based.
7. Try to run as a temporary local-only user not in LDAP. Should work.

Actual Results:
With LDAP-based users, I got the segfault on startup. For non-LDAP users, no segfault; thunderbird worked fine. When running 'nscd', thunderbird worked fine in all cases.

Expected Results:
Should have started thunderbird in all cases without segfaulting.
Thunderbird Version: 1.0.2 (20050317)
OpenLDAP Version: 2.1.30 (Gentoo port 2.1.30-r4)
Linux Distro: Gentoo 2005.0
Kernel Version: 2.6.11 (2.6.11-gentoo-r6)
Kernel Patches: Evms 2.5.2 recommended patches, UML SKAS patchset
Libc Version: 2.3.4 + NPTL
nss_ldap version: 2.2.6
pam_ldap version: 1.7.1
Here's the output from one of the segfaults:

jmarco[~] $ /usr/thunderbird/thunderbird
/usr/thunderbird/run-mozilla.sh: line 451:  2193 Segmentation fault      "$prog" ${1+"$@"}
Created attachment 181988 [details]
Attached an 'strace -f' of the failure

I did an 'strace -f /usr/thunderbird/thunderbird' and attached the results.
Created attachment 181996 [details]
Output from a simple gdb session on the core dump

Enabled coredumps and did a gdb on the resulting corefile. I could do more if you'd like.
Same problem happens to me. This time "nscd" doesn't help to work around the problem.
Created attachment 210379 [details]
New stack trace that points to: libldap50 conflict

Here's the related bug for PADL nss_ldap: I did more investigation on the problem at their request and found that there seems to be a conflict between the libldap50.so included with the binary version of Mozilla and whatever default libldapxxx.so is installed on the user's distribution. This occurs with nss_ldap because it causes libc to drag in the default LDAP library via NSS for anything that does user name translation. This seems to react poorly with the Mozilla libldap50 library. Included is the text snippet from my most recent comment on the PADL bug, and the stack trace from that bug.

From PADL bug:

Problem is not in nss_ldap. It's a Thunderbird bug. I brought up thunderbird under gdb on my desktop with LDAP user env and no nscd. Thunderbird died a messy death in strtok() with a corrupted stack, so no trace. No problem. It turns out this is luckily the first call to strtok(), so I was able to 'break strtok' and get a trace. It turns out that thunderbird has its own version of libldap.so called libldap50.so, with a version of ldap_str2chararray that conflicts with that in /usr/lib/libldap-2.2.so.7. The version in Thunderbird's libldap50 is called unexpectedly, and it looks like this is causing libldap-2.2 to poop its pants. As an experiment, I moved /usr/thunderbird/libldap50.so aside and symlinked it to the Linux /usr/lib/libldap-2.2.so.7, and sure enough Thunderbird worked perfectly. Therefore, this is a problem with the binary release of Thunderbird not handling conflicts in system LDAP libraries. Nothing wrong with nss_ldap from the looks of it.
If you read my previous comment, you'll see that a possible workaround for this bug right now is to go into /usr/thunderbird and:

mv libldap50.so moved-libldap50.so
ln -s /usr/lib/libldap-2.2.so.7 libldap50.so

Of course, change the locations of Thunderbird and your currently installed /usr/lib/libldapXXX as required for your distro/version. Then, restart Thunderbird.
the other implementations of this function do a dupe before using strtok. reporter, if someone here posted a patch, could you test it? i have absolutely no interest in setting up your configuration, something which has no use for me, but i might be willing to post patches for you to test and provide feedback. note: i'm not a mozilla ldap dev, i'm just someone who flags crash bugs.
Changing one function in the Mozilla libldap will probably not solve the entire problem here. Why not? Because there are undoubtedly dozens of small differences in behavior between the OpenLDAP libldap and the Mozilla libldap. I am not yet sure how to solve this problem in a way that is bulletproof. I added Rich Megginson to the CC in case he has any ideas/experience in dealing with this kind of conflict on Linux.
Yes, there are many large and small and incompatible differences between the OpenLDAP API and the Mozilla API. We had the same problem with newer binary versions of Apache on Linux because they are linked directly with OpenLDAP, and we have some modules that depend on the Mozilla API. We solved that problem by using LD_PRELOAD to make sure the Mozilla API is loaded first. However, in this case, you may need to do the reverse and use LD_PRELOAD to make sure the OpenLDAP API is loaded first. While that might solve the first problem, it will probably break other LDAP features of Thunderbird like type-down addressing, etc. So I'm not really sure how you can force PAM/NSS to use exclusively OpenLDAP calls while forcing the rest of Thunderbird to use exclusively Mozilla calls.

What we really need is a unified API between OpenLDAP and Mozilla.
2) Each API has extensions lacking in the other.
3) The command line tools are incompatible.
4) No one in either of the communities has either the time or the inclination to do the work.
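As a concrete sketch of the LD_PRELOAD idea above (the library path is an assumption for an OpenLDAP 2.2 system and varies by distribution; this is untested):

```shell
# Resolve LDAP symbols from the system OpenLDAP library first, so the
# NSS lookups bind there instead of into Mozilla's bundled libldap50.so.
# As noted above, this will likely break Thunderbird's own LDAP features.
LD_PRELOAD=/usr/lib/libldap-2.2.so.7 /usr/thunderbird/thunderbird
```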
I would be willing to test an updated libldap50 library if supplied as a binary, but I don't have the spare time to build from source. It's been a while since I've looked at this kind of stuff. From the glibc source code, it appears that the NSS code opens its database modules using dlopen(libnames[x], RTLD_LAZY). The problem is that Thunderbird is compile-time linked with libldap50.so, and so brings in its own version of any number of identically named but incompatible functions. By the time NSS does its dlopen() it's too late. Some of its internal function calls are going to resolve to already-bound functions from libldap50 and blow up.

One way to work around this issue would be to implement a thin LDAP glue library that only contains the functions called by Thunderbird. The glue library would internally dlopen("libldap50.so", RTLD_LAZY|RTLD_LOCAL) so as to not globally export the loaded symbols for binding by other libraries. The glue versions of the API calls would dlsym() for the real versions and pass through.

My workaround of replacing libldap50.so with OpenLDAP "works" for me, since I don't use any of the LDAP-related stuff in Thunderbird. It just keeps getpwuid()-type lookups from blowing up. I'd not be surprised to find that some of the LDAP-related functionality is actually broken.
Confirmed bug on my setup - changing the shared lib to OpenLDAP does resolve the issue with startup, but does kill address book LDAP usage.
*** Bug 333571 has been marked as a duplicate of this bug. ***
The same problem exists for thunderbird 2. The workaround to create a symlink to the local libldap-2.2.so.7 still fixes the issue.
What do Mozilla LDAP people think about using the same approach as is done for cairo:
(In reply to comment #10)
> What we really need is a unified API between OpenLDAP and Mozilla.

Yes. More to the point, we need a *good* LDAP API. Interested developers are invited to add comments here. That probably makes sense from a Mozilla perspective, but I'm not sure it's worth the overhead of carrying NSPR around everywhere. Also some interesting commentary here:

> 2) Each API has extensions lacking in the other.

Not relevant, since Mozilla's use of LDAP is quite plain-jane.

> 3) The command line tools are incompatible.

I don't see how associated tools are relevant to the Thunderbird/Mozilla apps.

> 4) No one in either of the communities has either the time or the inclination to do the work.

Well, out of boredom, I spent 2 hours this afternoon patching my Mozilla build tree to use OpenLDAP. I think the difficulties have been overstated, because it's working fine on my OpenSUSE laptop. Note that I haven't looked at the necessary autoconf changes, just edited my build tree after configure was already run. As such, edit config/autoconf.mk:

#LDAP_CFLAGS = -I${DIST}/public/ldap
#LDAP_LIBS = -L${DIST}/bin -L${DIST}/lib -lldap60 -lprldap60 -lldif60
LDAP_CFLAGS = -I/usr/local/include -DLDAP_DEPRECATED
LDAP_LIBS= -L/usr/local/lib -lldap_r -llber

and use the attached patch. A more thorough adaptation would go through and eliminate the use of LDAPv2/deprecated APIs, but this was quick and dirty...
Created attachment 333135 [details] [diff] [review]
Quick'n'dirty patch

Works with all ldap URLs that OpenLDAP supports (cldap, ldap, ldapi, ldaps); someone should add an option for choosing StartTLS...
Oh, you also need to turn off the MOZ_PSM stuff in directory/xpcom/base/src/Makefile:

#ifdef MOZ_PSM
#DEFINES += -DMOZ_PSM
#CPPSRCS += \
#	nsLDAPSecurityGlue.cpp \
#	$(NULL)
#endif

This leaves you with a Mozilla build that uses OpenLDAP's SSL support, whatever it may be linked to (OpenSSL or GnuTLS, currently). It's worth noting that OpenSSL is already loaded in the process under Linux, due to various other system libraries included in the build, so this isn't really making any situation worse. Since OpenSSL has been a standard system library on Linux for so long and pretty much everything uses it, it would make more sense to replace NSS with OpenSSL here.
It should be noted that NSS is being considered for inclusion in the LSB, and OpenSSL is not, due in part to commitment to ABI compatibility in NSS.
Created attachment 333197 [details] [diff] [review]
Cleaned up patch

This patch is properly ifdef'd so it won't break the existing MozLDAP functionality...
Created attachment 333905 [details] [diff] [review]
OpenLDAP+PSM support

This patch also supports PSM with OpenLDAP, using new callback hooks that were just added to OpenLDAP's CVS HEAD. (Those hooks probably will be released in OpenLDAP 2.4.12; 2.4.11 is current.)
The PSM support just mimics the existing MozLDAP behavior. It's worth noting that the existing behavior will typically break when chasing referrals: the hostname that's passed in persists until the LDAP* handle is closed and is used for all connection attempts. If a referral is received which points to ldaps:// on a different host, the hostname will not match and the connection should fail. If the referral points to the same host (as is common on MSAD) then it will probably succeed.

To fix this problem the connect callback should record a bit more info, to answer two questions:
1) whether it successfully connected once before - that will allow distinguishing referral chasing from the first successful connection.
2) whether the IP address of the current connection attempt matches the previous successful attempt - that will distinguish referrals to the same host from referrals to a different host.

Then when it's determined that this connect attempt is chasing a secure referral on a different server, it can just use the name provided in the callback argument list.
This whole referral issue probably belongs in a separate bug report, but I'm commenting here because the details only surfaced while investigating this report. Another obvious problem with the current PSM support: if the initial connection is plaintext but a referral to an ldaps:// URL is received and chased, the subsequent connection will not have the PSM layer installed. The fix for this is to always install the callback, and just have it pass-thru without pushing the PSM layer if the current connection didn't request ldaps://.
Created attachment 334053 [details] [diff] [review]
Fix referral issues

Also noticed, in the current code there's a potential memory leak in nsLDAPSSLInstall if prldap_set_sessioninfo fails; it will leak the dup'd hostname because it calls the wrong free function before returning. (nsLDAPSecurityGlue.cpp:369 should be calling nsLDAPSSLFreeSessionClosure()...)

The socketClosure stuff doesn't seem to accomplish anything. It should probably be ripped out; there's no special handling needed for closure of individual sockets. It's only needed for closing the session handle.

The attached patch fixes these two issues in the existing code. It also fixes the referral issues I mentioned before, for both MozLDAP and OpenLDAP.
(Mark said "be my guest" ...)
(In reply to comment #15)
> What do Mozilla LDAP people think about using the same approach as is done for cairo:

It seems this would make the app more dependent on having these specific libraries bundled with the app. It would be nice to be able to use the library already present on a system, instead.

An alternative approach, along similar lines, would be to avoid direct references to these library functions in any particular code. Instead, use dlopen (or its analogue) to find any suitable version of the desired library, and use dlsym to build up a table of function pointers for all of the needed entry points. Then wrap macros around all of the invocations in the main source, to always invoke these functions through your table of pointers.

On a separate note, in my current patches I left nsLDAPService::CreateFilter unimplemented because a quick grep through the source tree didn't turn up anyone using this function. But now I see that the AddressBook actually does try to use it for autocomplete, so I guess we'll have to provide an OpenLDAP version of ldap_create_filter() before this patch can be considered complete.
NSPR provides an analogue of dlopen that works on all Mozilla/Firefox/TBird platforms and is present in every FF browser and TB mail client (SM too). See documentation here
Another approach that sometimes works is to link these libraries with -Bsymbolic, to restrict them to resolving their symbol references to within their own shared objects. Unfortunately, it also requires whoever built the conflicting library to use the same option. I.e., it's not sufficient to link Mozilla's libldap with this flag; the platform's libldap must be linked this way as well. (The symbol conflict confusion is bi-directional; only linking one of the conflicting libraries only eliminates the conflict in one direction.) It also doesn't help when the shared library has other external dependencies (e.g. OpenLDAP's libldap depends on liblber). Had to mention this because the dlopen approach is still vulnerable to the problem of the dlopen'd libldap referencing the wrong liblber if another one was implicitly loaded into the process by some other library dependency.
Created attachment 334117 [details] [diff] [review]
Add ldap_create_filter

I note that in Mozilla's libldap/getfilter.c, which provides ldap_create_filter(), the header comment says "getfilter.c -- optional add-on to libldap". It's not a part of the libldap API spec, and it's totally self-contained - it has no dependencies on anything else in libldap. IMO it doesn't really belong in there; someone just tossed it in there for lack of a more obvious place. So for this patch, I've copied the necessary bits out of getfilter.c and pasted them in here where they're actually used.
Just for your information: the bug still exists in OpenSUSE 11.0 x86_64, kernel 2.6.25.18-0.2-default, MozillaThunderbird-2.0.0.17-3.1, nscd-2.8-14.1.

nscd crashes as soon as thunderbird is launched:

# ps -ef |grep nscd
root      4905     1  0 09:56 ?        00:00:00 /usr/sbin/nscd
root      4915  4844  0 09:56 pts/2    00:00:00 grep nscd
# logout
begou@thor: thunderbird
Registering Enigmail account manager extension.
Enigmail account manager extension registered.
/usr/bin/thunderbird: line 134:  4918 Erreur de segmentation $MOZ_PROGRAM $@
begou@thor: ps -ef |grep nscd
begou     4927  4818  0 09:57 pts/2    00:00:00 grep nscd

Using /usr/lib64/libldap-2.4.so.2 instead of /usr/lib64/thunderbird/libldap50.so seems to provide a good workaround.
Given that we have a patch, maybe this should block Thunderbird 3. It would be really nice to have an idea of how prevalent this is...
Though, despite having a patch, it seems like there's still some discussion to be had on whether it uses the optimal approach, or if one of the other approaches suggested here would make more sense.
Whatever the effect of this specific patch is, I'd like to voice my opinion that, unless Thunderbird gains useful LDAP support for reading and writing address books, there is no way to place Thunderbird onto the corporate desktop, although there are other limiting factors around as well.
Removing the flag that I mistakenly set: since this isn't part of Gecko, it can't block a Gecko release. I'd love to get this for Thunderbird 3, but it feels like there's still a non-trivial amount of work to do here. Not adding [tb3needs], because if this were the last bug standing, I don't think we would hold the release for it. Sorry I haven't been able to get back to this yet, Howard. :-(
I was just bitten by it on Tbird 3 (Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1pre) Gecko/20090607 Shredder/3.0b3pre). Oddly, it hangs early in startup when connecting over remote X (ssh-tunnel) but not when invoked locally.
Given that we have a patch, we should really try to drive this in for tb3. I'm not sure it's a problem that the code is in m-c, if it's all NPOTB for Firefox anyway.
NPOTB? Let me guess: Not part of the browser ?
Not part of the build.
Comment on attachment 334117 [details] [diff] [review]
Add ldap_create_filter

Is this patch still wanted/needed?
Unless something has changed, I suspect it's still necessary. Comment 37 still applies, though.
I heard there are distributions which patched glibc's name service switch components to avoid these crashes. One comment about that is I don't know more details though.
(In reply to comment #44) > (From update of attachment 334117 [details] [diff] [review]) > is this patch still wanted/needed? Independent of the OpenLDAP functionality, the bugs / memory leaks in the current code are still issues.
Here's a workaround: Add yourself as a local user in passwd & shadow as well as ldap.
@Bruce Edge: it is sufficient to have the users in passwd - adding them to shadow is not needed.
Besides any workaround (adding the user to the local passwd and running nscd), plain TB works, but not with Lightning. I can run TB+Lightning with a local-only user, but not with an LDAP user, on the same machine with the same pam_ldap config.
Forget my comments. My ldap user has its homedir mounted on a NFS volume mounted with 'noexec' flag. Removing this makes it work.
Wayne Mery (vn) <vseerror@Lehigh.EDU> wrote:
> I wrote:
> > unless Thunderbird gains useful LDAP support for reading and writing
> > address books, there is no way to place Thunderbird onto the corporate
> > desktop, although there are other limiting factors around as well.
> a_geek, are you in a corporate environment?

First off, please quote properly, and keep it here. To answer the question: a large part of my work is as a consultant to corporations with typically several hundred users, and my mind set is tweaked towards the requirements of such organisations. But I have a hard time seeing TB even in the SME area, as they also at least want shared address books and calendaring throughout the company, and will not accept an out-of-band management requirement for their address books (this is a large part of what LDAP access is about).
I.
Hi, I run a corporate network with approx. 90 users in 3 different sites and roaming users. Our only mail client is TB with Lightning, using IMAP mailboxes and SOGo as calendar server. I am interested in any progress/enhancement in either of those two. I can help with testing.
Hi, today I realized that this bug is affecting us also. We have just decided to move to TB from Evolution. If nscd is not installed on the system, TB does not even start; it gives a SEGFAULT and crashes. If nscd is present, TB starts but you can't configure accounts or access some menus such as the add-ons menu. I've tried to add the users in /etc/passwd but no way, it still crashes. Is it there any workaround for this??

Ubuntu: 10.10
TB: 3.1.7
nscd 2.12.1-0ubuntu10.2
libldap 2.4-2
libnss-ldap 264-2ubuntu2

The last lines of strace are this:

read(53, "#\n# LDAP Defaults\n#\n\n# See ldap."..., 4096) = 198
read(53, "", 4096) = 0
close(53) = 0
munmap(0xb7704000, 4096) = 0
geteuid32() = 25004
getuid32() = 25004
open("/home/user/ldaprc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/home/user/.ldaprc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("ldaprc", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=712, ...}) = 0
geteuid32() = 25004
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
unlink("/home/user/.thunderbird/3nkktvbg.default/lock") = 0
rt_sigaction(SIGSEGV, {SIG_DFL, [], 0}, NULL, 8) = 0
rt_sigprocmask(SIG_UNBLOCK, [SEGV], NULL, 8) = 0
tgkill(18919, 18943, SIGSEGV) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
You really need a stack trace for the crash to make progress, I think.
(In reply to comment #58) > You really need a stack trace for the crash to make progress, I think. Oh, sorry, I see Howard has a patch in progress.
Comment on attachment 334117 [details] [diff] [review]
Add ldap_create_filter

Switching review to standard8.
Mike, since this might be a serious issue on Ubuntu, you might want to look into driving this patch forward, if it's still applicable.
(In reply to comment #57) > Is it there any workaround for this?? Carlos, reportedly replacing libnss-ldap by libnss-ldapd is a workaround for this. See also the Ubuntu report on this bug,
btw. as far as I have tested - Mandriva 2010.2 is not affected anymore (thunderbird 3.1.7)
The bug in Mandriva 2010.2 (Thunderbird 3.1.7) seems fixed; maybe an internal patch, or the bug was in a library which was replaced.
Could not reproduce in Debian Squeeze using Thunderbird 3.1.7.
This also works fine on Ubuntu 8.04 and TB 3.1.7
Ro added the following comment to Launchpad bug report 507089:

As this bug was not present in Ubuntu 9.10 Karmic Koala (thunderbird 2.0.0.24+build1+nobinonly-0ubuntu0.9.10.3), it must have been introduced sometime in between Karmic and Lucid.
--
I've reproduced this bug in both Ubuntu Maverick and Natty, using Thunderbird 3.1.7. I'll dig a bit deeper, and keep you all posted.
Comment on attachment 334117 [details] [diff] [review] Add ldap_create_filter I'm not convinced by this solution. If I understand it correctly, then this is trying to make our API the same as OpenLDAP's version. So depending on the set-up of the (Linux) system, we could be using either the OpenLDAP library, or our own. We don't know what is in OpenLDAP's library, nor have will we have done extensive testing in it. If we get crashes or strange results, we may not even realise that we're using OpenLDAP's library. This would make support very difficult. I think this is what Mark was saying in comment 9. Given that we ship this library in Thunderbird, intending that Thunderbird is going to use this library, then maybe we should consider re-naming the library when we ship it within Thunderbird. This idea is from a similar approach Firefox took with SQLite in bug 513747. So for instance, we could ship libmozldap60.so etc where we build LDAP as part of Thunderbird. Hence, changing the name should resolve the conflicts we're seeing, and ensure that Thunderbird runs with what we intended. The LDAP c-sdk could still default to libldap60.so, and if building with the system LDAP c-sdk, then we could still use libldap60.so. If Linux distributions want to use the system LDAP for shipping Thunderbird, then I would expect them to verify/handle bugs with LDAP, especially if it isn't the LDAP c-sdk that we're shipping with Thunderbird. Obviously we may still want to move the two sets of LDAP APIs closer together, but I'm not convinced doing it as a result of this bug is the right thing to do. For example, it really does feel like ldap_create_filter should be in the c-sdk, and therefore maybe it needs adding to OpenLDAP's version, not removing from ours. If I've misunderstood things, then please correct me.?
(In reply to comment #70) >? You're right that Thunderbird or some other app *shouldn't* ever need to care about this, but the fact is that the old nss-ldap design causes these types of problems, and libnss-ldapd corrects the design flaw.
airtonix added the following comment to Launchpad bug report 507089: still a problem in 12.04 amd64 desktop and the default thunderbird provided. --
Created attachment 628378 [details] [diff] [review] rename libldap60.so to libmozldap60.so
While that change makes sense in general I'm wondering what it's supposed to fix? I'm pretty sure that the filename is not the issue.
Comment on attachment 628378 [details] [diff] [review] rename libldap60.so to libmozldap60.so From discussions about this sort of thing previously (which admittedly were a while ago), I believe that changing the library name wouldn't actually resolve all the problems. Additionally, I don't think it is really right to change the library name unless the developers of the Mozilla LDAP c-sdk really want to, as it would impact on all the users of it, and potentially the use of libraries on existing systems. I think that we should really go for changing the LDAP c-sdk that we use, and possibly replacing it with OpenLDAP as Howard was intending (or something else). To this effect I've put a proposal to tb-planning about this change: ()
Comment on attachment 334117 [details] [diff] [review] Add ldap_create_filter I'm rescinding my previous feedback- on this. Per previous comment on this bug, discussions have moved on, and we're considering moving away from the LDAP c-sdk, so this patch may therefore be heading in the right direction. Obviously, it would need to be updated and re-tested etc, but see the tb-planning discussion first.
The problem ist also in Thunderbird 15 still present! I get a backtrace like in: (gdb) bt #0 strtok_r () at ../sysdeps/x86_64/strtok.S:190 #1 0x00007ffff6ad3b3a in ldap_str2charray (str=0x7fffe3781ced "ldap://localhost/", brkstr=0x7fffe3781a4b ", ") at /usr/src/debug/mail-client/thunderbird-15.0.1/comm-release/ldap/sdks/c-sdk/ldap/libraries/libldap/charray.c:218 #2 0x00007fffe376c216 in ldap_url_parselist_int (ludlist=0x7fffe398be80, url=<optimized out>, sep=<optimized out>, flags=11) at url.c:1293 #3 0x00007fffe376da8b in ldap_int_initialize_global_options (gopts=0x7fffe398bdc0, dbglvl=<optimized out>) at init.c:537 #4 0x00007fffe376dc0d in ldap_int_initialize (gopts=0x7fffe398bdc0, dbglvl=<optimized out>) at init.c:653 #5 0x00007fffe3753309 in ldap_create (ldp=0x7fffffff9cb8) at open.c:108 By looking at (gdb) info sharedlibrary 0x00007ffff6ad2040 0x00007ffff6af6558 Yes /usr/lib64/thunderbird/libldap60.so 0x00007fffe3752fd0 0x00007fffe377e0a8 Yes /usr/lib64/libldap-2.4.so.2 you can see that the openldap routine is jumping into a mozilla routine, causing a segfault by applying strtok to "ldap://localhost/", which is a built in string in the openldap lib. A solution would be nice, because currently I can't use Thunderbird at all.
The problem is also in Thunderbird 16. It's a clash of symbols from libldap-2.4.so and libldap60.so. (gdb) bt #0 0x00007fffe708a100 in ldap_str2charray () from /usr/lib64/libldap-2.4.so.2 #1 0x00007fffe70816c6 in ldap_url_parselist_int () from /usr/lib64/libldap-2.4.so.2 #2 0x00007fffe7082f1b in ldap_int_initialize_global_options () from /usr/lib64/libldap-2.4.so.2 #3 0x00007fffe7083016 in ldap_int_initialize () from /usr/lib64/libldap-2.4.so.2 #4 0x00007fffe706a6ab in ldap_create () from /usr/lib64/libldap-2.4.so.2 #5 0x00007fffe706aa81 in ldap_initialize () from /usr/lib64/libldap-2.4.so.2 #6 0x00007fffe72a79c0 in do_init () from /lib64/libnss_ldap.so.2 #7 0x00007fffe72a9d1c in _nss_ldap_search_s () from /lib64/libnss_ldap.so.2 #8 0x00007fffe72ab580 in _nss_ldap_getbyname () from /lib64/libnss_ldap.so.2 #9 0x00007fffe72abd07 in _nss_ldap_getpwnam_r () from /lib64/libnss_ldap.so.2 #10 0x00007ffff70c5685 in getpwnam_r () from /lib64/libc.so.6 Removing/renaming libldap60.so caused some errors in finding the library, so this seems no solution: XPCOMGlueLoad error for file /usr/lib64/thunderbird/libxpcom.so: libxul.so: cannot open shared object file: No such file or directory Couldn't load XPCOM. We brute-forced renaming the symbol via sed -e 's:ldap_str2charray:ldap_str2xharray:' /usr/lib64/thunderbird/libldap60.so in order to make it work.
Problem is exist on Thunderbird 17 too, here we can find crash reports relevant to this issue from all versions of thunderbird:?
Howard is no longer working on this (In reply to Murz from comment #83) >? seems likely. arena_dalloc | ldap_x_free | ldap_set_lderrno arena_dalloc | ldap_ld_free | libnss_ldap-2.13.so@0x3955 arena_dalloc | ldap_set_lderrno arena_dalloc | ld-2.15.so@0x214e4 arena_dalloc | ld-2.15.so@0xe774
The bug is present in Thunderbird 24.2.0 running on Kubuntu 12.04.4. Running nscd appears to work around the issue, but I haven't tested it thoroughly for side effects. I find it somewhat ironic that a nearly nine year old bug of this magnitude has status: NEW. Software versions (all from Ubuntu repos): $ aptitude show thunderbird | grep Version Version: 1:24.2.0+build1-0ubuntu0.12.04.1 $ aptitude show libldap-2.4-2 | grep Version Version: 2.4.28-1.1ubuntu4.4 $ uname -a Linux tiny 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
(In reply to Maciej Puzio from comment #85) > I find it somewhat ironic that a nearly nine year old bug of this magnitude > has status: NEW. Actually, a better label would be CONFIRMED rather than NEW. That's what NEW really means, it does not refer to the bug's age.
(In reply to Tony Mechelynck [:tonymec] from comment #86) > Actually, a better label would be CONFIRMED rather than NEW. That's what NEW > really means, it does not refer to the bug's age. I am very well aware of that; my point was to draw attention to an unacceptable quality control, record-breaking in the length of bug fix cycle. Anyway, my further testing revealed several more issues with libldap, libpam-ldap and libnss-ldap, and I decided that this software as a whole does not meet my quality requirements. Instead I am deploying sssd as LDAP client for PAM and NSS, and this is my recommendation for readers of this page.
nslcd /nss-pam-ldapd would be the best choice, the code is quite mature since the basic LDAP functionality is ported from the old PADL code and well proven. It's also quite compact, it does just LDAP and nothing else. SSSD is unproven, and quite overloaded featurewise. For security/authentication software, complexity is the enemy of reliability. I shouldn't have to roll out that lecture again...
Maciej Puzio and Howard Chu - thanks for the info, moving to ldapd or sssd solves this problem.?
(In reply to Martin Baute from comment #90) >? It is a constant of Electronic Data Processing that no program is bug-free before it is obsolete. Even once a bug is identified, fixing it is not always easy. Complaining that "after so many years, no fix has been found" doesn't push the bug any nearer to be fixed, while it adds to the lot of useless rubbish (please excuse my language) that developers must wade through in order to find what the problem really is. Another constant of EDP is that there are never enough coding hands do do all that needs doing, even when, as at Mozilla, a lot of volunteers selflessly donate part of their time to help the people whose paid job it is to try and fix these bugs. Any help is always welcome, and the code is anyone's to look into. Do you know how to fix the bug? Good! Write a patch, ASSIGN the bug to yourself, find an appropriate reviewer by browsing and off you go. Once you get a positive review, set the checkin-needed flag, and someone will push your patch into the permanent source. You mean you don't know how to fix the patch? Ah, too bad. Neither do I. So let us wait patiently, even years if that's what it takes, until someone comes around who does, and in the meantime let's have a look at the "rules of the house",
(In reply to Tony Mechelynck [:tonymec] from comment #91) > ...lots of the usual deleted... So your answer to a bug that's been confirmed, and after nine years still expresses itself as SIGSEGV, is basically, "go fix it yourself"? You think *that* is a useful contribution to this bug report? Sometimes I'm really ashamed of my peers in the trade. And no, I won't wade through Thunderbird sources, because I've got other projects. I am a Thunderbird *user*, not a *maintainer*, so... ...go fix it yourself.
Confirming this bug for 31.1.1 Linux (Xubuntu 14.04): User accounts through ldap authentication make Thunderbird crash when trying to print. Installing nscd makes that go away.
Confirming bug for current release (45.4.0) for Ubuntu 16.04 (64bit) Thunderbird crashes for ldap accounts when: 1. creating new TB user profile 2. invoking print dialog in TB Workarround: sudo apt install nscd | https://bugzilla.mozilla.org/show_bug.cgi?id=292127 | CC-MAIN-2017-26 | refinedweb | 5,949 | 65.32 |
Back to index
#include <signal.h>
#include <time.h>
#include <unistd.h>
#include <errno.h>
Go to the source code of this file.
Definition at line 41 of file sleep.c.
{ unsigned int remaining, slept; time_t before, after; sigset_t set, oset; struct sigaction act, oact; int save = errno; if (seconds == 0) return 0; /* Block SIGALRM signals while frobbing the handler. */ if (sigemptyset (&set) < 0 || sigaddset (&set, SIGALRM) < 0 || sigprocmask (SIG_BLOCK, &set, &oset)) return seconds; act.sa_handler = sleep_handler; act.sa_flags = 0; act.sa_mask = oset; /* execute handler with original mask */ if (sigaction (SIGALRM, &act, &oact) < 0) return seconds; before = time ((time_t *) NULL); remaining = alarm (seconds); if (remaining > 0 && remaining < seconds) { /* The user's alarm will expire before our own would. Restore the user's signal action state and let his alarm happen. */ (void) sigaction (SIGALRM, &oact, (struct sigaction *) NULL); alarm (remaining); /* Restore sooner alarm. */ sigsuspend (&oset); /* Wait for it to go off. */ after = time ((time_t *) NULL); } else { /* Atomically restore the old signal mask (which had better not block SIGALRM), and wait for a signal to arrive. */ sigsuspend (&oset); after = time ((time_t *) NULL); /* Restore the old signal action state. */ (void) sigaction (SIGALRM, &oact, (struct sigaction *) NULL); } /* Notice how long we actually slept. */ slept = after - before; /* Restore the user's alarm if we have not already past it. If we have, be sure to turn off the alarm in case a signal other than SIGALRM was what woke us up. */ (void) alarm (remaining > slept ? remaining - slept : 0); /* Restore the original signal mask. */ (void) sigprocmask (SIG_SETMASK, &oset, (sigset_t *) NULL); /* Restore the `errno' value we started with. Some of the calls we made might have failed, but we didn't care. */ __set_errno (save); return slept > seconds ? 0 : seconds - slept; } | https://sourcecodebrowser.com/glibc/2.9/sysdeps_2posix_2sleep_8c.html | CC-MAIN-2017-51 | refinedweb | 283 | 68.97 |
This project explores the ability to have LED bars with an animation that hands off from one bar to the next. They are WiFi connected, so the bars can be anywhere in the world, and the animation will move between them, creating a shared experience over distance. They can also be used in one location to avoid the need to run LED signal wiring between them - each bar only needs a wall adapter to work.
These bars use Particle Core and Photon processors and WS2812b LEDs. The concern was that handing off an animation from one bar to the next would take too long, but the Particle.io Publish and Subscribe system worked fast enough for basic animations.
Step 1: Parts
The following parts were used in the project for each strip:
- A non-waterproof WS2812b LED strip. I used 30/meter. The non-waterproof ones usually have double-sided tape already attached to them so they are easy to mount. You will need 1 meter per channel since the channels are a meter long. The link above is for one meter, but you can also get 5m strips and cut them up if you are making several. More LEDs per meter is fine - just make sure to get a correspondingly large power supply. Each (5050) LED in these strips can use up to 60ma when fully on.
- Plastic electronic project box 60x36x25mm - this one is small enough to hold a Particle Photon.
A female panel mount 5.5mm x 2.1mm DC jack
- a 5v power supply - a 2 amp one should be fine with 30 LEDS @ 0.06 amp each when full on.
- Particle Photon - without headers if you want to solder the wires directly on. Note that this embedded controller with WiFi was previously called a Spark Core.
-. You can get it on Amazon too, but it's more than you need.
- double sided foam tape - 1/2" wide.
- 1000 uf capacitor - recommended for each strip, to help prevent voltage spikes from damaging the LEDs.
- Hookup wire. This 26 gauge silicone wire is very flexible and helps keep the wire from pulling the soldering pads off the LED strip. This one is just three colors - all you need. I have also used servo wire which is also very flexible, but silicone wire is my new favorite wire.
- Jumper wires - the female red, black, and yellow can be used to connect to the CPU if it has pins on it.
- A 330 ohm resistor to reduce noise in the LED strip data line.
- A 1N4448 Signal Diode or similar to allow the 3.3v processor to reliably drive the 5v LED strip.
- 3mm heat shrink tubing
Step 2: Attaching the Box to the Strip
To house the CPU, I used a small project box at the end of the strip. I did try a center-mount option (more info later), but the one on the end looks the best. To connect the project box to the strip, use a 6" piece of the 1/2" wide by 1/16" thick aluminum bar to run between the box and the strip. The bar is mounted on the bottom of the box so that the lid can be removed and replaced easily.
The box will need two holes to be drilled in it. The first is for the panel mount power jack. That should be centered on the outside end of the box, maybe a bit closer to the lid without hitting it to give a bit more room for the CPU. I used a step drill to get the hole to match the size of the jack. A step drill is really the best and safest way to drill larger holes like this.
On the other side of the box, you need a small hole to allow 3 wires to run from the controller to the strip. You can set the box and aluminum channel with the end cap with the hole on a table to get the right height for the hole.
Once the two holes are all set, the next step is to prep the LED strip.
Step 3: Wiring the Strip
The LEDs will be supplied with 5v dc. The CPU will also have 5v, but it is a 3.3v device and has an internal regulator. The trick is that the data line from the CPU to the LED strip will be at 3.3v, and the LEDs might work, but sometimes will flicker since that voltage is a bit low per the LED spec. An extended discussion is in a previous Instructable.
For this project, we will keep the 5v supply, and not modify the colors to control current usage. You could use a level shifter, and there is more info later on how that looks.
For this project, though, the new trick is to use a diode to drop the voltage to the first LED slightly, and then use that one to drive the rest of the strip. This "sacrificial LED" approach is discussed here and here. Note that we will still be using that first LED, but it will be very slightly dimmer - mostly unnoticeable. This is a very compact solution to the problem, and only requires a single diode to work.
So, for this strip, we will cut the first LED off the strip and separate that first one from the rest by a short distance. The Data and Gnd wires will simply have a jumper across that gap. Then the +5 red wire will be connected to the second LED with a diode connecting back to the first LED. In that way, the first LED will have 5v - ~0.8v (diode forward voltage) = ~4.2v as the supply, which is within the 70% recommendation for the LED for the data vs power lines. That first LED will pass the data to the rest at that ~4.2v, so all the rest can run at 5v.
This approach leaves that short gap distance between the first and second LEDs so they are not spaced exactly the same as the others. It's not very obvious unless you are looking for it. I think this is the easiest way to wire it, but I did try a different approach. For that, I cut off a slice from the end of the first LED so that there was still a gap, but the LED to LED spacing would remain. That one requires the Gnd and Data lines to have a jumper from the front of the first LED to the second, so it's a bit more wire, but does work - see the pictures for that.
Once that is complete, the LED can be placed in the channel.
The LED strip should have double sided tape on it so that it can be mounted to the channel. Test fit it first to get the position just right - you will need a small amount of room at the end for the end cap. Putting a small piece of tape under the first couple LEDs where you soldered the wires will insulate the soldering joints just in case. Make sure to have the input side of the LED strip next to the box where the controller will be.
You can now add the cover and end caps, but since they slide on, you only need the end cap with the hole for now. The box can now be attached to the channel with the 6" piece piece of aluminum bar. Cut a 6" piece of double sided foam tape and attach it to the bar. I cut a notch in the tape where the box has a ridge on the edge (see pictures). Run the wires into the box, and mount the box to the channel.
I used a label maker to make a small 5v label under the jack to avoid future confusion.
Step 4: Wiring the CPU
The are two ways to attach the connectors to the controller board. One is to solder the wires directly to the board for a fairly permanent project. You can also solder push connectors on the end, and just push those on the header pins on the board.
The power jack +5 and Ground are connected to the strip power, and the controller power. If you use a different controller, make sure it accepts a 5v power source. It is best to add a capacitor to the power source - something like a 1000 uf electrolytic one. This is recommended for the LEDs since they can be susceptible to power surges from the wall adapter. On the power jack I had, the longer pins was the outside ground. It's worth confirming that with yours.
The final connection is from the controller data out pin to the data in pin on the LEDs strip. I used a 33 ohm resistor in series with that connection to reduce noise in the line. The use of the capacitor and resistor is noted in many of the LED project sites like the Adafruit NeoPixel Überguide.
Step 5: Programming
The following code was used in the video in the Introduction step. It was a basic test of moving a dot between three of the bars. It's mostly a copy of the sample code and is not very elegant.
In the Setup function, we Subscribe to the led_mesh_handoff message that will come from the other LED strips. Now whenever another strip publishes that message, the myHandler function will run.
In the Loop, the basic idea is that the dot_pos variable will step from 0 to 29 (30 LEDs in the strip). When it hits the end, it will Publish a message for the next strip. That message is the id of the next strip, and for the sample code, this was the middle strip (#2), so the next one is #3, or "003". If dot_pos = 255, then we are just waiting.
In the message handler for messages received, we simply check to see if the id is for us (in this case #2, or "002"), and if so, set the dot_pos to zero for the Loop to move along.
That's it - very simple, and you can see how easy Publish and Subscribe can be.
This code is very simple - the hard coded IDs need to be changed for each strip, which is not hard, and the advantage is that they will always be in the same order. Another approach would be to have them auto-discover each other, but that will take more code to deal with timing issues. Another Instructable!
* This is a minimal example, see extra-examples.cpp for a version
* with more explantory documentation, example routines, how to * hook up your pixels and all of the pixel types that are supported. * */
#include "application.h" #include "neopixel/neopixel.h"
SYSTEM_MODE(AUTOMATIC);
// IMPORTANT: Set pixel COUNT, PIN and TYPE #define PIXEL_PIN D2 #define PIXEL_COUNT 30 #define PIXEL_TYPE WS2812B
uint8_t dot_pos = 0; uint16_t wait = 50;
Adafruit_NeoPixel strip = Adafruit_NeoPixel(PIXEL_COUNT, PIXEL_PIN, PIXEL_TYPE);
void setup() { strip.begin(); strip.show(); // Initialize all pixels to 'off' Particle.subscribe("led_mesh_handoff", myHandler);
} void loop() { uint16_t i, j; if (dot_pos != 255) { for(i=0; i<strip.numPixels(); i++) { strip.setPixelColor(i, 0, 0, 0); } strip.setPixelColor(dot_pos, 0, 127, 127); strip.show(); dot_pos++; if (dot_pos == PIXEL_COUNT) { strip.setPixelColor(dot_pos-1, 0, 0, 0); strip.show(); delay(wait); Particle.publish("led_mesh_handoff","003"); dot_pos = 255; //delay(1000); } else { // delay if we are not switching to a new strip delay(wait); } } } // loop
// Now for the myHandler function, which is called when the cloud tells us that our buddy's event is published. void myHandler(const char *event, const char *data) { // Spark.subscribe handlers are void functions, which means they don't return anything. // They take two variables-- the name of your event, and any data that goes along with your event. // In this case, the event will be "buddy_unique_event_name" and the data will be "intact" or "broken" // // Since the input here is a char, we can't do // data=="intact" // or // data=="broken"
// chars just don't play that way. Instead we're going to strcmp(), which compares two chars. // If they are the same, strcmp will return 0. if (strcmp(data,"002")==0) { dot_pos = 0; }
//if (strcmp(data,"intact")==0) { // if your buddy's beam is intact, then turn your board LED off //digitalWrite(boardLed,LOW); //} //else if (strcmp(data,"broken")==0) { // if your buddy's beam is broken, turn your board LED on //digitalWrite(boardLed,HIGH); //} //else { // if the data is something else, don't do anything. // Really the data shouldn't be anything but those two listed above. //} } // myHandler
Step 6: Level Shifter Variant
The first prototype of this project used an SN74HCT125N level shifter to make the 3.3v LED data wire control from the Spark Core up to 5v that the LED strip is looking for. In the end, this larger box and parts were not needed, but the build pictures may be useful for other projects that need more circuitry in the box. I used a wider bar and cut it into a shape that matched the box and strip.
Step 7: Trinket Variant
You can also use a small controller without Wifi, like the Adafruit Trinket. The mounting and connections are about the same, and the idea is to just have a fixed pattern running for holiday or party lighting that can be easily moved and setup. The channels come with mounting clips to make seasonal setup easier.
You could add a switch to change the lighting modes. See the next step for an example.
Step 8: Mounting in the Middle Variant
One other test was to mount the box in the middle of the strip to keep the ends clear. In the pictures the box is also painted white (did not get around to painting the lid). To run the wires, a hole was drilled in the strip and the middle of the box bottom. The power wires can just be connected to the LED strip in the middle, and the data wire is run along the side to the start of the strip. I can't say this is a great place to mount the box - perhaps in certain situations it would make sense.
The picture also shows a button, which is not connected, but could be used to switch modes or something. | http://www.instructables.com/id/WiFi-Connected-LED-Bars-With-Shared-Animations/ | CC-MAIN-2017-26 | refinedweb | 2,401 | 80.01 |
Regarding
Developing Simple Struts Tiles Application
This tutorial will show you how to develop a simple Struts Tiles
application. You will learn how to set up Struts Tiles and create an example
page with it.
What is Struts...
Developing Simple Struts
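The tutorial excerpt above mentions creating an example page with Tiles. As a rough sketch only (the file and attribute names below are assumptions, not taken from the original tutorial), a Struts 1 Tiles layout JSP typically pulls in named tiles like this:

```jsp
<%-- layout.jsp: a hypothetical Tiles layout template --%>
<%@ taglib uri="/WEB-INF/struts-tiles.tld" prefix="tiles" %>
<html>
<body>
  <tiles:insert attribute="header"/>
  <tiles:insert attribute="body"/>
  <tiles:insert attribute="footer"/>
</body>
</html>
```

A page JSP then renders a whole definition with `<tiles:insert definition="..."/>`, assuming that definition has been declared in tiles-defs.xml.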
redirect with tiles - Struts
Please specify in detail and send me the code.
Thanks. I am using Tiles in Struts 2, and I want to redirect to another definition tag via the url tag. Please help me...redirect with tiles I have a definition with three pages. I have a link
tiles - Struts
tiles Hi friends, I have a problem regarding Struts Tiles:
I want to insert header, body, footer and menu tiles, but I am not getting the body... it is not displaying here
composemail check it is placed at the below of the menu if...://
Hope that it will be helpful for you... code written in tiles definition to execute two times and my project may has
Hi.. - Struts
Hi.. Hi,
I am new to Struts. Please help me: what data is written in ...-html.tld, struts-logic.tld and struts-nested.tld, and what is the importance of these files?.....it's very urgent Hi Soniya,
I am sending you a link. This link
tiles - Struts
Tiles in Struts Example of Tiles in Struts. Hi, we will provide you the running example by tomorrow. Thanks.
Using tiles-defs.xml in Tiles Application
In this section I will show you how to eliminate the need for an
extra JSP file using tiles...
Using tiles-defs.xml in Tiles Application
... be
used to apply to the content. In this section I will show you how to use
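The snippet above is truncated. For illustration only (the definition and file names are assumptions, not taken from the tutorial), a Tiles 1.x tiles-defs.xml usually declares a base layout and pages that extend it:

```xml
<tiles-definitions>
  <!-- base layout: maps named attributes onto a layout JSP -->
  <definition name="site.mainLayout" path="/layout/layout.jsp">
    <put name="header" value="/tiles/header.jsp"/>
    <put name="body"   value="/tiles/body.jsp"/>
    <put name="footer" value="/tiles/footer.jsp"/>
  </definition>

  <!-- a concrete page: reuses the layout, overrides only the body -->
  <definition name="site.index" extends="site.mainLayout">
    <put name="body" value="/tiles/index-body.jsp"/>
  </definition>
</tiles-definitions>
```

This is what lets you drop the per-page "extra JSP" the snippet refers to: each page becomes a definition that overrides attributes, instead of a separate layout file.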
hi - data hibernate Please tell me.....if I am using Hibernate with Struts, is any database package required or not?.....without... me Hi Soniya,
I am sending you a link. I hope that, this link
tiles - Struts
Struts Tiles I need an example of Struts Tiles
Hi - Struts
tiles using struts2
tiles using struts2 Hello,
I am implementing Tiles using Struts 2 in Eclipse. The following problem occurred during execution. I have created.../struts-tiles.tld
org.apache.jasper.compiler.DefaultErrorHandler.jspError
JSP and Servlet did not run - JSP-Servlet
Thanks Hi,
Thanks for your reply - I noticed that you have changed...JSP and Servlet did not run I tried to run this program but when I... what I wanted.
Can anyone trace what I did wrong here?
what is struts? - Struts
what is struts? What is Struts? How is it used, and what about Hibernate? Can I use it in the Eclipse environment? The core... to survive. Struts helps you create an extensible development environment
Session management using tiles
Session management using tiles Hi, I am working on an e-learning project. My problem is that I am not able to maintain the session across the login page: suppose I log in as one user, and then open another tab and log in to another account
Hi
Hi Hi All,
I am new to RoseIndia and I want to learn Struts; I do not know anything about it. What exactly is Struts and where do we use it? Please help me. Thanks in advance.
Regards,
Deep...
struts tiles framework how could i include tld files in my web application
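For the question above about including TLD files: in a Servlet 2.3-era Struts 1 application, the Tiles TLD is usually copied into WEB-INF and registered in web.xml roughly like this (the paths are assumptions; newer containers can also resolve TLDs from JARs automatically):

```xml
<!-- web.xml fragment: map a taglib URI to the TLD shipped with Struts -->
<taglib>
  <taglib-uri>/WEB-INF/struts-tiles.tld</taglib-uri>
  <taglib-location>/WEB-INF/struts-tiles.tld</taglib-location>
</taglib>
```

JSPs then reference it with `<%@ taglib uri="/WEB-INF/struts-tiles.tld" prefix="tiles" %>`, with struts.jar placed in WEB-INF/lib.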
Struts Tutorials
application development using Struts. I will address issues with designing Action... as popular as Struts. If you need to brush up your knowledge on JUnit I can recommend... Struts Application
Did the title of the article make you curious
JSP Parse did not work - Java Server Faces Questions
:
=========================================================================
But this doesn't give me what I wanted.. How do you...JSP Parse did not work I have these codes...("Feedback");
out.print("Selected Values are: ");
for(int i=0;i
DropDown
What is XML?
on "What is
XML?" you will be able to understand the XML document and create a well-formatted XML
document.
What is XML Document?
Some facts about XML...What is XML?
In the first section of XML tutorials we will learn the basics
hi... - Struts
also its very urgent Hi Soniya,
I am sending you a link. I hope...hi... Hi Friends,
I have installed Tomcat 5.5, and I open the browser and type the command, but it does not run. Please let me
Hi... - Struts
Hi... Hello Friends,
The installation was successful.
I am installing JDK 1.5 and have not set the classpath in the environment variables; please write the classpath and send me the classpath command. Hi,
you set path = C
Regarding tiles - Struts
Regarding tiles I am taking an image from the database. So, I have already... the session, it is also shown. And I have also created one tile for calling that image in the JSP, and inserted the tile in the respective pages, in which I want
Hi... - Struts
Hi... Hello,
I want to chat facility in roseindia java expert...;Firstly you open the browser and type the following url in the address bar... window
And you chat with expert programmer
Hi - Struts
Hi Hi Friends,
I want to install the Tomcat 5.0 version; please help me. I have already visited your site, but I could not understand how to install it. Please give me some idea of how to install Tomcat version 5; I already have Tomcat 4.
Struts Books
covers everything you need to know about Struts and its supporting technologies..., you can reuse proven solutions and focus on what's unique to your own case. Struts...;
Programming Jakarta Struts: Using Tiles
About XML - XML
About XML What are the possible ways XML is used in J2EE?....
Thanks Hi,
See the different uses of XML at.... Thanks Prakash Hi Friend,
1)Apart from those uses,it can be used
About Struts processPreprocess method - Struts
About Struts processPreprocess method Hi java folks,
Help me understand the use of overriding the processPreprocess() method. What is the usual scenario... is not valid? Can I access the DB from the processPreprocess method?
Hi
xml - XML
xml hi
convert an XML document to an XML string. I am using the code below...-an-xml-document-using-jd.shtml
Hope that it will be helpful for you... = stw.toString();
after i am getting xml string result, like
Successxxx profile
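The answer above is cut off at `stw.toString()`. A self-contained sketch of converting a DOM document to an XML string with the standard javax.xml.transform API (the element name `profile` is only an assumption, echoing the snippet's output) is:

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XmlToString {
    // Serialize a DOM Document to its XML string form via a StringWriter,
    // which is what the truncated stw.toString() call in the answer returns.
    public static String toXmlString(Document doc) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter sw = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(sw));
        return sw.toString();
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny document in memory and print its string form.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("profile");
        root.setTextContent("Success");
        doc.appendChild(root);
        System.out.println(toXmlString(doc));
    }
}
```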
Tiles - Struts
Inserting Tiles in JSP Can we insert more than one tile in a JSP page?
struts
struts I have no idea about Struts. Please tell me briefly about Struts.
Hi Friend,
You can learn struts from the given link:
Struts Tutorials
What you Really Need to know about Fashion
You might think... know about fashion, even if you are not really
interested in following the trends. Fashion might not be important to you, but
fashion is not always about
what are Struts ?
what are Struts ? What are struts ?? explain with simple example.
The core of the Struts framework is a flexible control layer based... professional web application needs to survive. Struts helps you create an extensible
about webapps - Struts
about webapps hi deepak,
I have a query ,i.e.,
If we develop web-application By directory name MyStrutsProject,then we kept this folder... it is possible, where we need to do modifications.
Thanking you
I want detail information about switchaction? - Struts
I want detail information about switch action? What is switch action in Java? I want detail information about SwitchAction
hi
hi what is the code for printing stars as follows
*
* *
* * *
* * * *print("code sample");
Hi Friend,
Try... num=4;
int p = num;
int q = 0;
for (int i = 0; i <= num; i++) {
for (int j
Hi..
Hi.. what are the steps mandatory to develop a simple java program?
To develop a Java program following steps must be followed by a Java... read Java ClassPath
Then in the second step you are
required a Java editor, you
What would you rate as your greatest weaknesses?
is perfect, but from what I have learned from you about this job, I should make... know well enough about the position, you can recount what you like doing best.... It feels as if the interviewer doesn?t believe what you said. It
might also
hi!
hi! how can i write aprogram in java by using scanner when asking... to to enter, like(int,double,float,String,....)
thanx for answering....
Hi...);
System.out.print("Enter integer: ");
int i=input.next... JOptionPane("Do you want to withdraw");
Object[] options = new String[] { "Yes
HI!!!!!!!!!!!!!!!!!!!!!
HI!!!!!!!!!!!!!!!!!!!!! import java.awt.*;
import java.sql....=con.createStatement();
int i=st.executeUpdate("insert into bankdata(name,pass...("Do you want to withdraw");
Object[] options = new String[] { "Yes
using displaytag with struts2 - Struts
using displaytag with struts2 Hi, i am using struts2 framework... that it is not so good specially when i have used displaytag, but using displaytag... and handle all these things by coding.
So, i want to ask you whether do i use
XMl
XMl Hi.
please tell me about that
What characters are allowed in a qualified name?
Thanks
Struts Tutorial
In this section we will discuss about Struts.
This tutorial will contain the various aspects of Struts such as What is Struts, features of struts..., struts integration with other framework,
Struts Validator Framework.
What
Part I
Part I. Understanding XML
A1. Understanding XML :
Learn XML....
XML : An Introduction
What is XML, its importance... additional information about elements.
XML:Validation
How a DTD is used
struts
struts hi
Before asking question, i would like to thank you... technologies like servlets, jsp,and struts.
i am doing one struts application where i... into the database could you please give me one example on this where i i have
XML Interviews Question page10
all the document management benefits of using XML, but you don't have to worry...;
Can I use JavaScript, ActiveX, etc in XML files.... XML is about describing information; scripting languages and languages
Hi - Struts
Hi Hi Friends,
Thanks to ur nice responce
I have sub package in the .java file please let me know how it comnpile in window xp please give the command to compile
When you look back on the position you held last, do you think you have done your best in it?
;
If you say that you did, it could mean that your best is already... developments in your career.
62. Give me a reason why I should hire you when... fishing for what the industry talks about their company- don’t fall
What do you think about client-side/server-side coding ?
What do you think about client-side/server-side coding ? Hi,
What do you think about client-side/server-side coding
Struts Articles
experience using Struts in a Servlet environment and that you want to take advantage... framework. But what about leveraging EJB and Struts together? This tutorial... can be implemented in many ways using Struts and having many developers working
snmp /xml gateway - XML
snmp /xml gateway hi
i would like to develop a snmp management application based on xml
for that i need to construct a snmp/xml gateway which... snmp result into xml
what i need is hoe to use xml as a window application
Struts - Jboss - I-Report - Struts
Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report
Struts
Struts I want to create tiles programme using struts1.3.8 but i got jasper exception help me out
XML - XML
XML What is specific definatio of XML?
Tell us about marits and demarits of XML?
use of XML?
How we can use of XML? Hi,
XML... language much like HTML used to describe data. In XML, tags are not predefined
xml or database - XML
xml or database If I implement some web applications, which is a better way to retrieve and parse xml file directly using DOM or storing those xml files in Database and retrieve. My xml files will be about 100 - 200 and each
Xml Parser
Xml Parser Hi...
please tell me about
What parser would you use for searching a huge XML file?
Thanks
Reg XML - XML
Reg XML How can I become an XML programmer?What are the channels I possess on INTERNET
Thanks & Regards
Ravi Pullela Hi,
XML... u tonight
---------------------------------------
I am sending you
about db - Struts
About DB in Struts I have one problem about database. i am using netbeans serveri glassfish. so which is the data source struts config file should be? Please help me
Struts - Struts
Struts hi,
I am new in struts concept.so, please explain...://
I hope that, this link will help you... .
what are needed the jar file and to run in struts application ?
please kindly
Java + XML - XML
a corresponding xml file to get the value.
I will appriciate if you can help me. Hi friend,
I am sending you a link. This link will help you...Java + XML 1) I have some XML files,
read one xml
Parsing XML using Document Builder factory - XML
Parsing XML using Document Builder factory Hi ,
I am new to XML . I am trying to parse a XML file while is :
For this i am using Document... (what is want is all the values). also how to read the attributes of the tag i am
XML Tutorial
you understand what XML is all about. (You'll learn about XML in later sections... will learn what XML is about. You'll understand the basic XML syntax. An you... Language. In our XML tutorial you will learn what XML is and the difference between | http://www.roseindia.net/tutorialhelp/comment/8468 | CC-MAIN-2015-14 | refinedweb | 2,246 | 75.81 |
Setting the release in each SDK:

C#:         using Sentry; SentrySdk.Init(o => o.Release = "my-project-name@2.3.12");
Go:         sentry.Init(sentry.ClientOptions{ Release: "my-project-name@2.3.12", })
JavaScript: Sentry.init({ release: "my-project-name@2.3.12" })
PHP:        Sentry\init([ 'release' => 'my-project-name@2.3.12', ]);
Python:     import sentry_sdk; sentry_sdk.init(release="my-project-name@2.3.12")

The integration will then send Sentry metadata (such as authors and files changed) about each commit pushed to those repositories.
Associate Commits with a Release
In your release process, add a step to create a release object in Sentry and associate it with commits from your linked repository.
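As a sketch of what such a step might look like in a release pipeline (a config fragment, not a verified recipe: the version string is a placeholder, and the use of sentry-cli here is an assumption rather than something this page prescribes):

```shell
# Hypothetical CI step: create the release, associate commits, finalize.
# "my-project-name@2.3.12" is a placeholder version string.
sentry-cli releases new "my-project-name@2.3.12"
sentry-cli releases set-commits "my-project-name@2.3.12" --auto
sentry-cli releases finalize "my-project-name@2.3.12"
```

The `--auto` flag asks the CLI to determine the repository and commit range from the local checkout.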
Troubleshooting
If you receive an “Unable to Fetch Commits” email, take a look at our Help Center Article.
After Associating Commits
After this step, suspect commits and suggested assignees will start appearing on the issue page. We determine these by tying together the commits in the release, files touched by those commits, files observed in the stack trace, authors of those files, and ownership rules.
GitHub and Identifying Commit Authors
Mach (German for do) is a command-line interface to help developers perform common tasks. The purpose of mach is to help make the developer experience better by making common tasks easier to discover and perform.
Requirements
Mach requires a current version of mozilla-central (or a tree derived from it); mach was first committed on 2012-09-26. Mach also requires Python 2.7. Mach itself is Python 3 compliant; however, modules used by mach likely aren't Python 3 compliant just yet, so stick to Python 2.7.
Running
From the root of the source tree checkout, you should just be able to type:
$ ./mach
If all is well, you should see a help message.
For full help:
$ ./mach help
Try building the tree:
$ ./mach build
If you get error messages, make sure that you have all of the build requisites for your system.
If it works, you can look at compiler warnings:
$ ./mach warnings-list
Try running some tests:
$ ./mach xpcshell-test services/common/tests/unit/
Or run an individual test:
$ ./mach mochitest browser/base/content/test/general/browser_pinnedTabs.js
You run mach from the source directory, so you should be able to use your shell's tab completion to tab-complete paths to tests. Mach figures out how to execute the tests for you!
mach and mozconfigs
It's possible to use mach with multiple mozconfig files. mach's logic for determining which mozconfig to use is effectively the following:
- If a .mozconfig file exists in the current directory, use that.
- If the
MOZCONFIG environment variable is set, use the file pointed to in that variable.
- If the current working directory mach is invoked with is inside an object directory, the mozconfig used when creating that object directory is used.
- The default mozconfig search logic is applied.
Here are some examples:
# Use an explicit mozconfig file. $ MOZCONFIG=/path/to/mozconfig ./mach build # Alternatively (for persistent mozconfig usage): $ export MOZCONFIG=/path/to/mozconfig $ ./mach build # Let's pretend the MOZCONFIG environment variable isn't set. This will use # the mozconfig from the object directory. $ cd objdir-firefox $ mach build
Adding mach to your shell's search path
If you add mach to your path (by modifying the
PATH environment variable to include your source directory, or by copying
mach to a directory in the default path like
/usr/local/bin) then you can type
mach anywhere in your source directory or your objdir. Mach expands relative paths starting from the current working directory, so you can run commands like
mach build . to rebuild just the files in the current directory. For example:
$ cd browser/devtools $ mach build webconsole # Rebuild only the files in the browser/devtools/webconsole directory $ mach mochitest-browser webconsole/test # Run browser-chrome tests from browser/devtools/webconsole/test
Enable tab completion
To enable tab completion in
bash, run the following command. You can add the command to your
.profile so it will run automatically when you start the shell:
source /path/to/mozilla-central/python/mach/bash-completion.sh
This will enable tab completion of mach command names, and in the future it may complete flags and other arguments too. Note: Mach tab completion will not work when running mach in a source directory older than Firefox 24.
For zsh, you can call the built-in bashcompinit function before sourcing:
autoload bashcompinit bashcompinit source /path/to/mozilla-central/python/mach/bash-completion.sh
Frequently Asked Questions
Why should I use mach?
You should use mach because it provides a better and more unified developer experience for working on Mozilla projects. If you don't use mach, you have to find another solution for the following problems:
- Discovering what commands or make targets are available (mach exposes everything through
mach help)
- Making more sense out of command output (mach offers terminal colorization and structured logging)
- Getting productive tools in the hands of others (mach advertises tools to people through
mach help; people don't need to discover your tool from a blog post, wiki page, or word of mouth)
Are there any known issues?
Several. Mach is still relatively young and there are a number of bugs and numerous areas for improvement. Some larger known issues include:
- MinTTY (alternative terminal emulator on Windows) doesn't work
- Text encoding issues (especially on Windows where Latin-1 is not the default system encoding).
- Failed commands spew lots of extra error output (e.g. you will see a big mach error message when all that happened was that an invoked command returned a non-zero exit code, possibly expectedly).
Generally, mach is known to work pretty well without issues for most people.
How do I report bugs?
Bugs against the mach core can be filed at.
Most mach bugs are bugs in individual commands, not bugs in the core mach code. Bugs for individual commands should be filed against the component that command is related to. For example, bugs in the build command should be filed against Core :: Build Config. Bugs against testing commands should be filed somewhere in the Testing product.
How is building with mach different from building with client.mk or using make directly?
Currently,
mach build simply invokes client.mk. There are no differences in terms of how the build is performed (well, at least there shouldn't be).
Mach does offer some additional features over manual invocation of client.mk:
- If on Windows, mach will automatically use pymake instead of GNU make, as that is preferred on Windows.
- mach will print timings with each line of output from the build. This gives you an idea of how long things take.
- mach will colorize terminal output (on terminals that support it - typically most terminals except on Windows)
- mach will scan build output for compiler warnings and will automatically record them to a database which can be queried with
mach warnings-list and
mach warnings-summary. Not all warnings are currently detected. Do not rely on mach as a substitute for raw build output.
- mach will invoke make in silent mode. This suppresses excessive (often unnecessary) output.
Is mach a build system?
Does mach work with mozconfigs?
Yes! You use mozconfigs like you have always used them.
Does mach have its own configuration file?
Not yet. It will likely have one some day.
Should I implement X as a mach command?
The build team generally does not like one-off make targets that aren't part of building (read: compiling) the tree. This includes things related to testing and packaging. These weigh down make files and add to the burden of maintaining the build system. Instead, you are encouraged to implement ancillary functionality outside of make (preferably in Python). If you do implement something in Python, hooking it up to mach is often trivial (just a few lines of proxy code).
How does mach fit into the modules system?
Mozilla operates with a modules governance system where there are different components with different owners. There is not currently a mach module. There may or may never be one. Mach is just a generic tool. The mach core is the only thing that could fall under the purview of a module and an owner.
Even if a mach module were established, mach command modules (see below) would likely never belong to it. Instead, mach command modules are owned by the team/module that owns the system they interact with. In other words, mach is not a power play to consolidate authority for tooling. Instead, it aims to expose that tooling through a common, shared interface.
Who do I contact for help or to report issues?
The maintainer of mach is Gregory Szorc (gps on IRC or gps@mozilla.com). You can also ask questions in #mach, #developers. Or, if you say mach in any IRC channel gps is in, he will probably notice.
Can I use mach outside of mozilla-central?
Yes! The mach core is in mozilla-central inside the python/mach directory and available on PyPI at.
Can I use mach with B2G?
Yes again! After you have cloned the B2G repo run config.sh to pull in gecko. The bootstrap script will then scan the gecko repository for mach commands that also apply to the B2G repo. There's also an external module you can install called b2g-commands which provides B2G specific mach commands.
Is there a logo for mach?
Not yet. gps would like the logo to be of a unicorn breaking the sound barrier (mach speed) in front of a rainbow. Contributions are welcome.
mach Architecture
Under the hood mach is a generic command dispatching framework which currently targets command line interfaces (CLIs). You essentially have a bunch of Python functions saying "I provide command X" and mach hooks up command line argument parsing, terminal interaction, and dispatching.
There are 3 main components to mach:
- The mach core.
- Mach commands
- The mach driver
The mach core is the main Python modules that implement the basic functionality of mach. These include command line parsing, a structured logger, dispatching, and utility functions to aid in the implementation of mach commands.
Mach commands are what actually perform work when you run mach. Mach has a few built-in commands. However, most commands aren't part of mach itself. Instead, they are registered with mach.
The mach driver is the mach command line interface. It's a Python script that creates an instance of the mach core, registers commands with it, then tells the mach core to execute.
The canonical source repository for the mach core is the python/mach directory in mozilla-central. The main mach routine lives in main.py. The mach driver is the mach file in the root directory of mozilla-central. As you can see, the mach driver is a shim that calls into the mach core.
As you may have inferred, mach is implemented in Python. Python is our tooling programming language of choice at Mozilla. Mach is also Python 3 compliant (at least it should be).
Adding Features to mach
Most mach features come in the form of new commands. Implementing new commands is as simple as writing a few lines of Python and registering the created file with mach.
The first step to adding a new feature to mach is to file a bug. You have the choice of filing a bug in the
Core :: mach component or in any other component. If you file outside of
Core :: mach, please add
[mach] to the whiteboard.
Mach is relatively new and the API is changing. So, the best way to figure out how to implement a new mach command is probably to look at an existing one.
Start by looking at the source for the mach driver. You will see a list defining paths to Python files (likely named
mach_commands.py). These are the Python files that implement mach commands and are loaded by the mach driver. These are relative paths in the source repository. Simply find one you are interested in and dig in!
mach Command Providers
A mach command provider is simply a Python module. When these modules are loaded, mach looks for specific signatures to detect mach commands. Currently, this is implemented through Python decorators. Here is a minimal mach command module:
from __future__ import print_function, unicode_literals

from mach.decorators import (
    CommandArgument,
    CommandProvider,
    Command,
)

@CommandProvider
class MachCommands(object):
    @Command('doit', description='Run it!')
    @CommandArgument('--debug', '-d', action='store_true',
        help='Do it in debug mode.')
    def doit(self, debug=False):
        print('I did it!')
From
mach.decorators we import some Python decorators which are used to define what Python code corresponds to mach commands.
The decorators are:
- @CommandProvider
- This is a class decorator that tells mach that this class contains methods that implement mach commands. Without this decorator, mach will not know about any commands defined within, even if they have decorators.
- @Command
- This is a method decorator that tells mach that this method implements a mach command. The arguments to the decorator are those that can be passed to the
argparse.ArgumentParser constructor by way of sub-commands.
- @CommandArgument
- This is a method decorator that tells mach about an argument to a mach command. The arguments to the decorator are passed to
argparse.ArgumentParser.add_argument().
The class and method names can be whatever you want. They are irrelevant to mach.
An instance of the
@CommandProvider class is instantiated by the mach driver if a command in it is called for execution. The
__init__ method of the class must take either 1 or 2 arguments (including
self). If your class inherits from
object, no explicit
__init__ implementation is required (the default takes 1 argument). If your class's
__init__ takes 2 arguments, the second argument will be an instance of
mach.base.CommandContext. This object holds state from the mach driver, including the current directory, a handle on the logging manager, the settings object, and information about available mach commands.
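A small self-contained sketch of this 1-versus-2-argument rule (the class names and the CommandContext stand-in below are illustrative, not mach's actual implementation):

```python
import inspect

class CommandContext(object):
    """Stand-in for mach.base.CommandContext."""
    def __init__(self, cwd):
        self.cwd = cwd

class OneArgProvider(object):
    pass  # default __init__(self): the driver would pass no context

class TwoArgProvider(object):
    def __init__(self, context):
        self.context = context  # the driver would pass the CommandContext here

def instantiate(provider_cls, context):
    # Mimic the driver: count the declared parameters (including self)
    # to pick the calling convention.
    init = provider_cls.__init__
    if init is object.__init__:
        return provider_cls()
    nargs = len(inspect.getfullargspec(init).args)
    return provider_cls(context) if nargs == 2 else provider_cls()
```

With this sketch, `instantiate(TwoArgProvider, CommandContext('/src'))` ends up with the context attached, while `OneArgProvider` is constructed bare.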
The arguments registered with @CommandArgument are passed to your method as keyword arguments using the
**kwargs calling convention. So, you should define default values for all of your method's arguments.
The return value from the @Command method should be the integer exit code from the process. If not defined or None, 0 will be used.
Registering mach Command Providers
Once you've written a Python module providing a mach command, you'll need to register it with mach. There are two ways to do this.
If you have a single file, the easiest solution is probably to register it as a one-off inside
build/mach_bootstrap.py. There should be a Python list of paths named
MACH_MODULES or similar. Just add your file to that list, run
mach help and your new command should appear!
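For illustration, the registration amounts to appending one path to that list. A hypothetical excerpt (the real list in build/mach_bootstrap.py is much longer, and the paths below are made up):

```python
# Hypothetical excerpt of the module list in build/mach_bootstrap.py:
# relative paths to Python files containing mach command providers.
MACH_MODULES = [
    'python/mach_commands.py',
    'testing/mach_commands.py',
    'tools/mytool/mach_commands.py',  # <-- your new command module
]
```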
Submitting a mach Command for Approval
Once you've authored a mach command, submit the patch for approval. Please flag gps@mozilla.com for review.
Mach Command Modules Useful Information
Command modules are not imported into a reliable Python package/module "namespace." Therefore, you can't rely on the module name. All imports must be absolute, not relative.
Because mach command modules are loaded at mach start-up, it is important that they be lean and not have a high import cost. This means that you should avoid global
import statements as much as possible. Instead, defer your import until inside the
@Command decorated method.
Mach ships with a toolbox of mix-in classes to facilitate common actions. See
python/mach/mach/mixin. If you find yourself reinventing the wheel or doing something you feel that many mach commands will want to do, please consider authoring a new mix-in class so your effort can be shared! | https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/mach | CC-MAIN-2016-30 | refinedweb | 2,443 | 66.23 |
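To make the idea concrete, here is a toy mix-in in that spirit; the names and behavior are invented for illustration, and the real mix-ins in python/mach/mach/mixin look different:

```python
class LoggingMixin(object):
    """Toy shared behavior a command provider class could mix in."""
    def log(self, format_str, **params):
        # A real mach mixin would route this through the structured logger.
        return format_str.format(**params)

class MyCommands(LoggingMixin):
    def doit(self):
        # Reuse the shared helper instead of reimplementing it.
        return self.log('Processed {count} files', count=3)
```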
From: John Maddock (john_at_[hidden])
Date: 2005-04-02 05:13:26
>>Those are linker errors not compiler errors, It means it compiled fine
>>but it can't find an implementation for those.
>>
>>CodeWarrior is much stricter on const types we've found that several
>>times with Boost Library.
>>The non-LazyPtr is often something called extern but the initialization
>>is not done.
>>
> The const types clue sounds promising. I don't follow his non-LazyPtr
> point.
Actually, looking again at the errors, I think his suggestions are red herrings; the errors are all from wide-character facets. Maybe you could see if something like the program below would link?
John.
#include <locale>
int main()
{
std::locale l;
const std::ctype<wchar_t>& ct = std::use_facet<std::ctype<wchar_t> >(l);
return !ct.is(std::ctype<wchar_t>::lower, L'a');
} | https://lists.boost.org/boost-testing/2005/04/0585.php | CC-MAIN-2019-43 | refinedweb | 138 | 54.93 |
On Nov 12, 2003, at 3:45 AM, Brian Warner wrote:

>> While we're at it - warner, could we have the mac os x reactor also run
>> tests with cfreactor?
>
> My tests failed:
>
> 24:buildbot at quartz% python2.2 bin/trial -to -r cf twisted.test
> ...
>   File "/Users/buildbot/Buildbot/slave/OSX-full2.2/Twisted/twisted/internet/cfreactor.py", line 42, in ?
>     import cfsupport as cf
> ImportError: Failure linking new module
>
> It might be related to the following warning during the compilation phase:
>
> gcc -arch ppc -bundle -flat_namespace -undefined suppress build/temp.darwin-6.8-Power Macintosh-2.2/cfsupport.o -o twisted/internet/cfsupport.so -framework CoreFoundation -framework CoreServices -framework Carbon
> ld: warning dynamic shared library: /usr/lib/libSystem.dylib not made a weak library in output with MACOSX_DEPLOYMENT_TARGET environment variable set to: 10.1
>
> It doesn't hang, so I'm going to turn on the test anyway (it will just fail
> all the time). I don't know a lot about OS-X linker magic.. somebody else
> will have to figure out the problem here.

I will not EVER support that version of Python 2.2. You'll need to use python2.3 ( ) or upgrade the machine to OS X 10.3. If it works under that version of python, it's purely by chance.

-bob
JSON, HTTP, and React go hand in hand. Odds are, you will need to interact with a JSON array of data at some point while building your React app.
In this guide, you will learn how to use the Axios HTTP client to request JSON data from an API. Then, you will learn how to receive a JSON response, loop over the data this response contains, and set the state of your React component based on this data.
Let's get started!
Note: This guide assumes that you are working within a React app that has been created using
create-react-app.
The first part of looping over a JSON array involves actually requesting the data! To request data from an API, we must start by using an HTTP client. In this guide, we have chosen Axios as our HTTP client library of choice. There are many alternatives! Fetch is a great, native client that you can use if you are trying to restrict the number of dependencies you are using in your app.
To download Axios, navigate to your root project directory where your
package.json file is located, and run the command:
npm install --save axios. Now you have Axios installed inside of your React project and can start using it!
In the code below, you will see the basic shell of a
SolarSystem component. This component's job is to request a list of planets from the server via HTTP and then render a list of
Planet components based on the response. The number of gas giants will also need to be counted by looping over the JSON array contained within the response.
class SolarSystem extends React.Component {
  constructor(props) {
    super(props);

    this.state = {
      planets: [],
      gasGiantsCount: 0
    }
  }

  componentDidMount() {
    ...
  }

  render() {
    ...
  }
}
First, you will need to set the
planets state of the component based on a list of planets requested within the
componentDidMount lifecycle method. The
componentDidMount lifecycle method will ensure that the planet's data is only requested after the initial render of the
SolarSystem component. This is helpful because you can display a simple loading screen while you wait for the request to complete.
Below is the updated code that shows how to use Axios to make a request to the private planets API.
import axios from 'axios';

class SolarSystem extends React.Component {
  constructor(props) {
    super(props);

    this.state = {
      planets: [],
      gasGiantsCount: 0
    }
  }

  async componentDidMount() {
    const { data: planets } = await axios.get(this.props.planetApiUrl),
      gasGiantsCount = planets.filter(planet => planet.isGasGiant).length;

    this.setState({ planets, gasGiantsCount });
  }

  render() {
    ...
  }
}
Wow, that was easy! In the above code, you first imported the Axios HTTP client library. Then, you marked
componentDidMount as an
async function. This enables the use of the
await keyword in order to resolve the
Promise returned from the call to
axios.get.
Once the response successfully loaded, you then had access to an Axios response object. One of the nice things about Axios is that it automatically parses the JSON for you! You can access this JSON on the
data field of the response object. In the above code, we make use of JavaScript's object destructuring capabilities to grab the
data field off of the response object and rename it to
planets. In this case, the
data field comprises our already-parsed JSON array.
With access to the JSON array you need, you then looped over it using one of JavaScript's Array prototype functions. In this case, you can see that your code loops over the planets and counts the number of gas giants in the solar system.
Finally, you then used the
setState function in order to set the
planets and
gasGiantsCount state of the component.
With your data requested, now it's time to load your view. Remember, this component simply wants to display a list of
Planet components. This can be achieved by implementing the following
render function.
  ...

  render() {
    const { planets, gasGiantsCount } = this.state;

    // A stable key prop helps React reconcile the rendered list.
    const planetComponents = planets.map(planet =>
      <Planet key={planet.name} name={planet.name} type={planet.type} />);

    return (
      <div>
        <h2>{gasGiantsCount}</h2>
        <ul>{planetComponents}</ul>
      </div>
    )
  }
}
Working with JSON arrays is a crucial skill when creating any web app. In React, you can make working with JSON arrays easy by using an HTTP client like Axios to help you with things like the automatic parsing of JSON. Tools like Axios will help make your job easier by letting you write less code!
Of course, requesting JSON isn't the only thing Axios can help you out with. Eventually, you will want to send JSON arrays within your React app as well! For more information regarding Axios please check out the Axios documentation. | https://www.pluralsight.com/guides/set-state-of-react-component-by-looping-over-a-json-array | CC-MAIN-2022-40 | refinedweb | 820 | 65.32 |
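As a taste of the sending direction (the endpoint below is made up): when you call something like axios.post('/api/planets', payload) with a plain object, Axios stringifies it to JSON and sets the Content-Type header for you. The serialization it performs amounts to:

```javascript
// The payload a hypothetical axios.post call would serialize.
const payload = { name: 'Pluto', isGasGiant: false };

// What ends up on the wire as the request body:
const body = JSON.stringify(payload);
```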
13 January 2012 09:45 [Source: ICIS news]
GUANGZHOU (ICIS)--The project is expected to cost around $2.5bn (€1.95bn) and is most likely to be built in the southern port city of Dawei.
The refinery is expected to come on stream in 2015-2016, according to the source.
Zhuhai Zhenrong will benefit from tax breaks as well as favourable land use and exchange rate policies if the firm chooses to build the new refinery at Dawei, according to the source.
The firm will partner with two Myanmar-based firms - a government-linked firm and a private company - to build the new refinery in Dawei, the source said.
The $50bn Dawei Special Industrial Zone will include a deepwater port, an oil refinery and petrochemical units, according to media reports.
Additional reporting by Nurluqman Surat | http://www.icis.com/Articles/2012/01/13/9523453/chinas-zhuhai-zhenrong-plans-to-build-new-refinery-in-myanmar.html | CC-MAIN-2014-35 | refinedweb | 136 | 69.82 |
adaptify 1.0.0
Adaptify
A library for adaptive decision-making with Dart. It supports the Dart VM and the browser with dart2js. This library was initially created for my master's thesis. Also, a second library for code distribution, called Code Mobility, was developed. For more information on this topic see the blog post on the blog of my employer inovex GmbH.
Description
Development
For feedback and bug reports, just open an issue. Feel free to fork this project, create pull requests, and contact me with any questions.
Documentation
The features are explained in the dartdoc documentation and the example implementations.
License
Adaptify is licensed under the BSD License.
Changelog
1.0.0
- First public release version with basic features
0.0.1
- Initial version
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies: adaptify: ^1.0.0
2. Install it
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
3. Import it
Now in your Dart code, you can use:
import 'package:adaptify/adaptify. | https://pub.dev/packages/adaptify | CC-MAIN-2020-16 | refinedweb | 196 | 59.9 |
# $NetBSD: unif.awk,v 1.4 2008/04/29 06:53:01 martin Exp $ # Copyright (c) 2003 The NetBSD Foundation, Inc. # All rights reserved. # # This code is derived from software contributed to The NetBSD Foundation # by David Laif' lines of file # # usage: awk -f unif.awk -v defines=varlist file # # looks for blocks of the form: # # .if var [|| var] # ... # .else # ... # .endif # # and removes the unwanted lines # There is some error detection... BEGIN { split(defines, defns) for (v in defns) deflist[defns[v]] = 1 delete defns nested = 0 skip = 0 } /^\.if/ { nested++ else_ok[nested] = 1 if (skip) next for (i = 2; i <= NF; i += 2) { if ($i in deflist) next if ($(i+1) != "" && $(i+1) != "||") exit 1 } if (!skip) skip = nested next } /^\.else/ { if (!else_ok[nested]) exit 1 else_ok[nested] = 0 if (skip == nested) skip = 0 else if (!skip) skip = nested next } /^\.endif/ { if (nested == 0) exit 1 if (skip == nested) skip = 0 nested-- next } { if (skip == 0) print } END { if (nested != 0) exit 1 } | http://cvsweb.netbsd.org/bsdweb.cgi/src/distrib/utils/sysinst/Attic/unif.awk?rev=1.4&content-type=text/x-cvsweb-markup | CC-MAIN-2015-48 | refinedweb | 164 | 77.74 |
Corrected PKGBUILD to include working source mirror and also implemented freetype2.patch to correct freetype2 related errors and allow for successful build and use.
Package Details: fbsplash 1.5.4.4-17
Dependencies (11)
-)
- gpm
- lcms
- libjpeg (mozjpeg-git, mozjpeg, libjpeg-turbo)
- libmng
- libpng (libpng-git)
- miscsplashutils
- fbsplash-extras (optional) – additional functionality like daemon icons
- linux-fbcondecor (optional) – enable console background images
- python (optional) – convert themes from splashy to fbsplash
- uswsusp-fbsplash (optional) – suspend to disk with fbsplash
Required by (3)
Sources (9)
Latest Comments
steftim commented on 2018-03-20 08:22
nemesys commented on 2016-10-19 21:35
Det commented on 2016-08-12 18:12
Disowned. If it still works, and you still prefer over Plymouth, you should adopt.
ironman820 commented on 2016-08-12 18:10
Latest version of Freetype2 moved the old Freetype packages to a subdirectory.
I had to change the lines that handle that to:
#Fix freetype => freetype2
sed -i 's|\(#include <freetype\)/|\12/freetype/|' src/libfbsplashrender.c
sed -i 's|\(#include <freetype\)/|\12/freetype/|' src/ttf.h
sed -i 's|\(#include <freetype\)/|\12/freetype/|' src/ttf.c
tomkwok commented on 2014-09-27 13:39
Updated with patches from alex.syrel
alex.syrel commented on 2014-02-04 21:16
To build without such errors:
ttf.c:28:30: fatal error: freetype/ftoutln.h: No such file or directory
#include <freetype/ftoutln.h>
compilation terminated.
Edit PKGBUILD and in section build() before make add:
#Fix freetype => freetype2
sed -i 's:\(#include <freetype\)/:\12/:' src/libfbsplashrender.c
sed -i 's:\(#include <freetype\)/:\12/:' src/ttf.h
sed -i 's:\(#include <freetype\)/:\12/:' src/ttf.c
ShadowKyogre commented on 2014-01-13 18:29 :).
Aspiring commented on 2013-12-11 05:45
configure.ac:59: error: possibly undefined macro: AC_MSG_ERROR
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1
cmsigler commented on 2013-09-28 00:19
Hi,
I hope I've come up with a solution more or less done The Arch Way (TM) now. I've posted an updated PKGBUILD here:
This incorporates the current version of freetype2 (2.5.0.1-1), so I use the patches from freetype2 in ABS, which are here:
I also make some patches to splashutils 1.5.4.4, which are here:
With this PKGBUILD and these patches, as well as the other files and patches provided with the current version of fbsplash, this will hopefully build and work correctly. I've been testing it for a few days with no problems.
HTH.
Clemmitt
cmsigler commented on 2013-09-10 14:28
@Parakoopa:
This was a bit of a pain to diagnose. When I tried to rebuild, I kept getting this error message:
"/usr/bin/ld: cannot find -lfreetype"
But the libraries for freetype2 were installed. Happily, Ubuntu forums bought me a clue:
fbsplash needs to link against the static library, libfreetype.a, but this is no longer provided by freetype2 since its upgrade to version 2.5. So, we need to install a local alternative package to freetype2. Here's the PKGBUILD I use for what I've called freetype2-static 2.5.0.1-1:
Once freetype2-static is built and installed, patches are needed to get fbsplash to build against lcms2, since libmng version 2 builds against lcms2, not the older lcms version 1. I also included fixes to get autoconf/automake to stop complaining. This is the patch I've put together:
Finally, here's a patch to update fbsplash's PKGBUILD:
Using a static library-built freetype2 package along with this revised PKGBUILD and new patch to get fbsplash to build against lcms2 should get things working again. After all of this, I was able to rebuild uswsusp-fbsplash and it seemed to work fine.
HTH.
Clemmitt
Anonymous comment on 2013-08-11 12:09
Doesn't work anymore because of freetype 2.5
ShadowKyogre commented on 2013-06-04 15:01
@cmsigler: Updated the PKGBUILD to consider the binaries that were left floating in the sbin directory and updated the source URL to point to a working source.
cmsigler commented on 2013-06-04 12:59
Hi,
After the update to ver. 1.5.4.4-14, on my system there are still a few binaries in /sbin/ directory:
sbin/
sbin/fbcondecor_ctl.static
sbin/fbcondecor_helper
sbin/fbsplashctl
sbin/fbsplashd.static
sbin/splash-functions.sh
sbin/splash_util.static
Also, I used the following as the source URL:{pkgver}.tar.bz2
As always, HTH.
Clemmitt
UnsolvedCypher commented on 2013-06-03 19:58
It looks like the source is down or has been moved.
ShadowKyogre commented on 2013-06-03 19:10
Updated to consider /usr/bin move.
FoolEcho commented on 2013-06-03 15:53
Could you update to take care about the /usr/bin/move ?
Thanks.
Jristz commented on 2013-05-01 08:51
my issue is that miscsplashutils is a 404 at its download source, making it impossible to build this one
and the daemon that gets installed, fbcondecor.daemon, is for initscripts, while Arch supports systemd
Primoz commented on 2013-03-27 00:32
Hi I get this message:
line 32: autoreconf: command not found
I have no idea what I have to install or do to get that to work. Help please.
emorkay commented on 2013-01-12 18:10
Hi,
My issue with the progress bar is not resolved with this update. I suspect it is a problem with the fbsplash-extras package. Is it? Because when I execute 'mkinitcpio -p linux', it produces the following error:
/etc/rc.d/functions.d/fbsplash-extras.sh: line 370: /etc/rc.conf: No such file or directory
Thanks,
cmsigler commented on 2013-01-10 15:29
@ShadowKyogre: Thanks for updating. fbsplash 1.5.4.4-13 WFM. Cheers. Clemmitt
cmsigler commented on 2013-01-05 13:36
@ShadowKyogre: Apologies for the delay -- Real Life issues :(
I'm not having any problems -- it just WFM :/ I get a working count-down progress bar on the hibernation splash screen (first it says "Snapshot" then "Saving xxxxxx pages, press Esc to abort" or something like that with progress bar counting down from top to bottom). It works just like it did before mkinitcpio-0.12.0-2 *shrug*
I think I'm just doing something simple, running s2disk from root command line to hibernate, then rebooting with kernel command line option resume=UUID=blahblah-blah-blah-blah-blahblah to do a fast resume without any splash screen on reboot. All this works fine. Please note than I'm not running in a VM but on bare iron on an x86_64 laptop.
One problem I ran into: I'm not able to build fbsplash 1.5.4.4-12 as-is from AUR. I get this error from autoreconf:
parallel-tests: error: required file './test-driver' not found
parallel-tests: 'automake --add-missing' can install 'test-driver'
autoreconf: automake failed with exit status: 1
I've been able to fix this by changing 'autoreconf' to 'autoreconf -i' in PKGBUILD. HTH.
One other thing I noticed: In fbsplash.initcpio_install, the function add_runscript doesn't appear to take an argument (although it doesn't throw an error or warning, either). mkinitcpio seems to require that the name of the hooks script be identical to the name of the hook itself, i.e., 'fbsplash'. Since this is already true, there's no immediate problem. Again, HTH.
Clemmitt
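The 'autoreconf -i' tweak described above can be scripted; here's a sketch, demonstrated on a scratch fragment rather than the real PKGBUILD (the bare 'autoreconf' line is an assumption about the stock file):

```shell
# Scratch fragment standing in for the PKGBUILD's build() body.
printf '  autoreconf\n' > PKGBUILD.demo
# Add the -i flag so autoreconf installs missing auxiliary files
# such as ./test-driver:
sed -i 's/^\([[:space:]]*\)autoreconf$/\1autoreconf -i/' PKGBUILD.demo
cat PKGBUILD.demo   # "  autoreconf -i"
```

In practice you'd run the same sed line against the real PKGBUILD before invoking makepkg.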
ShadowKyogre commented on 2013-01-04 00:39
@emorkay: Had a look at fbsplash-basic.sh and I think that's what needs to be fixed. Since the script is written with the old initscripts in mind, I think we need to add something that hooks into similar events in systemd.
ShadowKyogre commented on 2013-01-04 00:24
@emorkay: I'm aware of that, as I made that comment on here before I uploaded the package. On the guest machine I was testing fbsplash on, I wasn't using the proprietary drivers. I'll take a look to see what else is preventing the progress bar from working right.
emorkay commented on 2013-01-03 18:41
Hi,
The progress bar isn't moving at all. Also, the only message I see is 'Initializing the kernel'.
Using arch with full systemd. I have ATI card and using proprietary drivers.
Thanks,
ShadowKyogre commented on 2013-01-03 14:28
@sausageandeggs: I'm not on testing on the virtual computer or on my computer and was able to build it. However, I do have a live CD on here with the kernel from testing along with grsecurity patches. Let me see if compiling it from there generates problems.
Anonymous comment on 2013-01-03 05:02
Not sure if it's my box or not, but the only way I could get this to compile was by adding the "-i" option to the "autoreconf" cmd, otherwise it kept giving this error
parallel-tests: error: required file './test-driver' not found
parallel-tests: 'automake --add-missing' can install 'test-driver'
autoreconf: automake failed with exit status: 1
Also, adding " sed -ie 's|INCLUDES|ACLOCAL_AMFLAGS|g' src/Makefile.am " before running "autoreconf [-i]" gets rid of warning " src/Makefile.am:53: warning: 'INCLUDES' is the old name for 'AM_CPPFLAGS' (or '*_CPPFLAGS') "
I'm running testing btw
ShadowKyogre commented on 2013-01-03 01:04
@cmsigler: Tested the change suggested in the test virtual computer. The progress bar doesn't move anywhere, but at least it's better than nothing.
cmsigler commented on 2013-01-02 16:20
With the upgrade to mkinitcpio-0.12.0-2 last month (December 2012), the deprecated SCRIPT keyword is now ignored. I believe the fix for fbsplash is simple; I commented out the SCRIPT line in /usr/lib/initcpio/install/fbsplash and in the next line added a call to add_runscript. HTH. Clemmitt
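A sketch of the edit cmsigler describes, demonstrated on a scratch copy of the install hook rather than the real one (the real path would be /usr/lib/initcpio/install/fbsplash, and the exact contents of the SCRIPT line are an assumption):

```shell
# Scratch copy of the hook; the SCRIPT line contents are an assumption.
printf 'SCRIPT="fbsplash"\n' > fbsplash.hook
# Comment out the deprecated SCRIPT= assignment and add an add_runscript
# call right after it (GNU sed: later expressions see the edited line):
sed -i -e 's/^SCRIPT=/#&/' -e '/^#SCRIPT=/a add_runscript' fbsplash.hook
cat fbsplash.hook
```

Run against the real hook file, this would silence the deprecation while keeping the hook name unchanged, which is what add_runscript without an argument relies on.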
ShadowKyogre commented on 2012-12-17 07:00
Finally on winter break and installed fbsplash on a fresh virtual machine (desktop machine uses Plymouth, both are on systemd). Fbsplash doesn't seem to appear at all (the screen shows the splash screen, then it just shows the login screen).
ShadowKyogre commented on 2012-11-17 20:24
@wassup: I'll look into it as soon as I start winter break, since there's about three weeks left of the semester at my university. During that time, I'll also be investigating what's mentioned here:.
Hopefully my mother doesn't try to drag me off during the winter break. She has a habit of doing that whenever I'm free from school ><.
wassup commented on 2012-10-23 13:25
@cyberpatrol: I'll miss you. Did you change the distro itself as well?
@current-team: How about fixing the bug relating to the inability of applying the splash to all ttys?
Anonymous comment on 2012-09-30 13:54
Due.
Anonymous comment on 2012-09-14 09:33
@wassup: It doesn't seem to be an upstream bug, since it only fails in the fbcondecor initscript, but works in rc.local, which is executed directly after the DAEMONS array. And if the verbose splash is set manually on the console it works, too. So I'm pretty sure it has something to do with the scripts. But there are still some more issues due to the many changes made in initscripts and the infiltration of initscripts by this systemd(-tools) crap (yes, I've tested systemd meanwhile) recently.
wassup commented on 2012-09-13 23:33
@cyberpatrol: I just came here to ask whether there is any improvement on this matter or if that is an upstream bug. Happily, I see there is some development - good work. I hope you will eventually trace down the scripts bug.
Anonymous comment on 2012-09-08 00:55
It took a long time, but I found at least a workaround for the FBIOCONDECOR_SETCFG bug.
Remove fbcondecor from DAEMONS and add these lines to /etc/rc.local:
. /etc/conf.d/fbcondecor
. /sbin/splash-functions.sh
for tty in ${SPLASH_TTYS}; do
fbcondecor_set_theme ${SPLASH_THEME} ${tty}
done
I guess the bug is somewhere in the scripts, since rc.local is run directly after the daemons, and adding a sleep to the fbcondecor initscript doesn't help.
Anonymous comment on 2012-07-15 20:56
@nvlplx: If you had updated fbsplash before glibc, there shouldn't have been any files from fbsplash in /lib anymore. But this way worked, too, of course.
Anonymous comment on 2012-07-15 20:53
@cyberpatrol: you're right. But Glibc refused to upgrade because of the presence of files from fbsplash in /lib. So finally I removed fbsplash and its dependencies, then upgraded glibc and reinstalled fbsplash.
Now fbsplash compiles without problems.
Thanks
Anonymous comment on 2012-07-15 14:20
@nvlplx: librt is part of glibc. So I guess there's an issue with your glibc installation. This package compiles with glibc 2.16.0-1, which used /lib, and with glibc 2.16.0-2, which uses /usr/lib. So this move actually can't be the reason. I'd suggest first updating your system as described in the News on the Arch Linux homepage, if you haven't done it, yet, or to reinstall glibc.
Anonymous comment on 2012-07-15 12:41
I can't compile fbsplash because of this error :
checking for clock_gettime in -lrt... no
configure: error: 'librt' library was not found.
I don' found any librt package or something like that.
If anyone has an idea ?
Anonymous comment on 2012-07-14 07:49
ok, removing fbsplash-extras fixed the progress bar issue. re-installing it brought the issue back, so moving my comments to fbsplash-extras package.
Cheers.
Anonymous comment on 2012-07-14 06:43
@cyberpatrol, yes I have extras installed. Had a fully functioning fbsplash until this update. Config has not changed and only using a basic theme (literally a background and progress bar). Tried a mkinitcpio -p linux but this made no difference. Same issue on 3 machines - xbmc media centre intel video, main desktop nvidia binary blob, and touchscreen tablet ati radeon drivers.
Cheers.
Anonymous comment on 2012-07-14 00:45
@padfoot: Do you have fbsplash-extras installed? I haven't tried it, yet, with fbsplash-extras, and without it I can't reproduce it. The only issue besides the FBIOCONDECOR_SETCFG bug is that the screen gets black for a short time while cryptsetup opens the containers. But that's a bug in the initscripts. If this still exists after the next update I'll report it.
Anonymous comment on 2012-07-13 23:22
With this latest package, my progressbar no longer updates during boot. I am not getting any errors and my boot and the rest of the splash all work fine. I am guessing this is all to do with the migration from /lib to /usr/lib? Should this rectify itself once the migration is complete?
Cheers.
Anonymous comment on 2012-07-10 14:30
Changed it anyway. Both methods should actually work.
Anonymous comment on 2012-07-10 14:20
@Det: Btw., the directories /lib/splash/cache and /lib/splash/tmp shouldn't exist anymore when post_upgrade() is run, because these directories belong to the package and are removed by pacman before the install script is executed. So I guess something goes wrong with your interpreter when interpreting the install script.
Anonymous comment on 2012-07-10 14:01
@Det: When or how do you get this error? I can't reproduce it with bash. This is needed, because there have been directories (sys) and files created at runtime by fbsplash, which are not known to pacman and therefore not removed. So /lib/splash has to be removed manually.
Det commented on 2012-07-10 13:24
You should use quotes with:
if [ ! "`ls -A /lib/splash`" ]; then
rmdir /lib/splash
fi
to not produce:
/tmp/alpm_WT0vKG/.INSTALL: line 23: [: cache: unary operator expected
But why is this needed if the folders 'cache' and 'tmp' are created there anyway?
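For reference, a runnable form of the quoted test, using "$(...)" quoting so an empty directory is detected reliably (the directory name is just for this demo):

```shell
mkdir -p splash.demo
# With the command substitution quoted, an empty listing compares as an
# empty string and the directory gets removed:
if [ ! "$(ls -A splash.demo)" ]; then
  rmdir splash.demo
fi
[ ! -d splash.demo ] && echo "removed empty dir"
```

Without the quotes, an empty `ls -A` output collapses and `[` sees a stray operand, which is what produces the "unary operator expected" error.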
Anonymous comment on 2012-07-08 13:22
Fixed /usr/etc in splash-functions.sh.
/lib/splash gets removed after an upgrade from a previous version if it still exists and is empty.
/usr/lib/splash/sys now belongs to the package.
Anonymous comment on 2012-07-08 08:08
Hmm, appears to be broken now:
grep -n usr/etc/ $( find pkg/ -type f )
pkg/sbin/splash-functions.sh:104: if [ -x "/usr/etc/splash/${SPLASH_THEME}/scripts/${event}-pre" ]; then
pkg/sbin/splash-functions.sh:105: /usr/etc/splash/"${SPLASH_THEME}"/scripts/${event}-pre ${args}
pkg/sbin/splash-functions.sh:125: if [ -x "/usr/etc/splash/${SPLASH_THEME}/scripts/${event}-post" ]; then
pkg/sbin/splash-functions.sh:126: /usr/etc/splash/"${SPLASH_THEME}"/scripts/${event}-post ${args}
pkg/sbin/splash-functions.sh:153: [ -f /usr/etc/splash/splash ] && . /usr/etc/splash/splash
pkg/sbin/splash-functions.sh:154: [ -f /usr/etc/conf.d/splash ] && . /usr/etc/conf.d/splash
pkg/sbin/splash-functions.sh:155: [ -f /usr/etc/conf.d/fbcondecor ] && . /usr/etc/conf.d/fbcondecor
pkg/usr/bin/bootsplash2fbsplash:9:$path_bp = "/usr/etc/bootsplash/";
Anonymous comment on 2012-07-08 00:46
Moved from /lib to /usr/lib (hopefully), and (hopefully) fixed the command not found issue. The FBIOCONDECOR_SETCFG is still there, probably an upstream issue or an issue caused by the initscripts.
For people who use harddisk encryption the screen could get black while the harddisk gets unlocked. This is an issue with initscripts, since it was fixed with the last initscripts update and came back with the latest initscripts update. I'll wait for the next initscripts update.
Anonymous comment on 2012-06-29 01:20
@cesarramsan: Of course, that's a different issue. There are currently 3 or 4 issues with fbsplash. I'm still working on it. The replacement of the command should just fix the "command not found" error. But I don't know if systemd-vconsole-setup does the same as set_consolefont, because I couldn't find a documentation about it, yet. At least I don't see a difference, so I guess this issue is fixed. Still remain 2 or 3 issues.
cesarramsan commented on 2012-06-28 23:43
@cyberpatrol: I tried your fix of replacing the command set_consolefont on line 98 by the command /usr/lib/systemd/systemd-vconsole-setup but I still get the "FBIOCONDECOR_SETCFG failed, error code 22." errors.
Anonymous comment on 2012-06-25 22:09
@ShadowKyogre: Could you, please, edit the file /etc/rc.d/funcions.d/fbsplash-basic.sh on your system, and replace the command set_consolefont on line 98 by the command /usr/lib/systemd/systemd-vconsole-setup. I don't get this error message, but I know that you're not alone. So before just do a lot of useless updates on AUR, it would be nice, if you could test it this way.
There are some other bugs, I'm currently trying to fix.
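That replacement could be scripted like this, demonstrated on a scratch copy (the real target is /etc/rc.d/functions.d/fbsplash-basic.sh; the line number and surrounding contents are assumptions):

```shell
# Scratch copy standing in for fbsplash-basic.sh line 98.
printf 'set_consolefont\n' > fbsplash-basic.demo
# Swap the removed helper for systemd's console setup tool:
sed -i 's|set_consolefont|/usr/lib/systemd/systemd-vconsole-setup|' fbsplash-basic.demo
cat fbsplash-basic.demo
```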
ShadowKyogre commented on 2012-06-18 18:37
@cyberpatrol: I'm also getting the same issues as wassup along with the following two:
* "/etc/rc.d/functions.d/fbsplash-basic.sh: line 98: set_consolefont: command not found"
* Various "FBIOCONDECOR_SETCFG failed, error code 22." and "FBIOCONDECOR_SETSTATE failed, error code 22." even though the terminals have their backgrounds set after logging in to manually start the daemon.
Anonymous comment on 2012-06-14 18:54
This is most likely related to the latest update of initscripts and the switch to systemd-tools. I have to look into it, but it can take a while. I better don't say what I think of Lennart Poettering and his crap.
wassup commented on 2012-06-14 11:38?
wassup commented on 2012-06-14 11:31?
Anonymous comment on 2012-05-30 12:51
I guess the server was temporarily down. I can reach and download the source package from there.
Anonymous comment on 2012-05-28 13:20
To download "splashutils" change in PKGBUILD "" with "" it works
Anonymous comment on 2012-05-27 20:59
the server alanhaggai.org seems to be down. tried from both my home computer in Winnipeg and my VPS in Toronto
Anonymous comment on 2012-04-15 14:38
Thanks for the info. Changed the path and moved the hook from /lib/initcpio to /usr/lib/initcpio.
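A sketch of what the path change amounts to, demonstrated on a scratch copy (the real file is the fbsplash-basic.sh hook script, and the exact sleep invocation is an assumption):

```shell
# Scratch copy standing in for the hook script's sleep call.
printf '/bin/sleep 0.02\n' > fbsplash-basic.demo
# Rewrite the hardcoded path after the /bin -> /usr/bin move:
sed -i 's|/bin/sleep|/usr/bin/sleep|' fbsplash-basic.demo
cat fbsplash-basic.demo   # "/usr/bin/sleep 0.02"
```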
trya commented on 2012-04-15 13:00
'sleep' path is not '/bin/sleep' anymore, but '/usr/bin/sleep'. Can you correct fbsplash-basic.sh, please?
Anonymous comment on 2012-02-06 00:13
Don't forget to rebuild this package after the libpng update from [extra].
Anonymous comment on 2011-11-23 19:01
As far I understand, the .la files are necessary for static libraries (.a files). The static libraries are necessary in fbsplash for at least the early boot stage in the initrd. So, yes, the .la files seem to be necessary in fbsplash.
rafaelff commented on 2011-11-23 04:33
The *.la files (libtool related) in /usr/lib are needed in this package?
Anonymous comment on 2011-11-16 04:29
makedepends should include autoconf
Anonymous comment on 2011-11-16 04:28
makedepends should include autoconf
Anonymous comment on 2011-10-13 13:38
@trontonic: I know, but I'm not the upstream maintainer. And it's closing down in 2 1/2 months.
xyproto commented on 2011-10-13 11:36
berlios.de is closing down, fyi
Anonymous comment on 2011-10-11 21:02
I don't know systemd and I'm not sure if I want to get to know it. I guess all the scripts which are written by kujub would need to be rewritten for systemd. Kujub may correct me if I'm wrong.
misc commented on 2011-10-10 21:14
fbsplash doesn't (yet) properly work with systemd, does it?
Anonymous comment on 2011-10-03 22:32
@windel: Please, read the AUR User Guidelines in the wiki. Link is on the AUR homepage.
windel commented on 2011-10-03 18:09
Thanks for the package!
Please add 'autoconf' to build depends.
Anonymous comment on 2011-09-07 14:43
Well, the fun is a point. But in this case I wouldn't be too sure that you can fix it depending on the reason for this issue.
Anonymous comment on 2011-09-07 14:19
But where's the fun in that! Besides, I only 'break' what I know I can fix
Anonymous comment on 2011-09-07 14:15
@sausageandeggs: Better don't try to break it if it's working for you.
Anonymous comment on 2011-09-07 13:46
@cyberpatrol Everything's been working on (both) my setups now for a while (can't remember since which pkg); nothing about my setup has changed at all, so I thought it was something you'd done. I'll have a fiddle and see if I can break it again!
Anonymous comment on 2011-09-07 11:27
@cyberpatrol: Works only if the splash daemon is able to open the keyboard event device. For that purpose, /sbin/splash-functions.sh splash_start() calls splash_set_event_dev() which tries to find the device node and, on success, sends its path-name to the daemon via FIFO. When starting the daemon early in the initcpio and splash_start_initcpio.patch isn't applied, the daemon seems to do the initial painting (fadein) first before reading and processing the "set event dev /dev/input/${t}" FIFO command. Because the painting takes time, the daemon appears to be unable to open the device node in case initcpio init already did switch_root at its end. (root filesystem where daemon was started is gone)
Anonymous comment on 2011-09-06 17:36
@kujub: Do you have a idea why it's not possible to switching back from verbose to silent splash after switching vice versa? I couldn't look at it, yet.
Anonymous comment on 2011-09-06 17:34
I'm coming back to a very old issue, which I couldn't reproduce. So I couldn't go deeper into it. Now that I replaced my PS/2 keyboard by a USB keyboard I can confirm this issue:
cat: can't open '/sys/class/input/input*/capabilities/ev' no such file or directory
/init: line 619: arithmetic syntax error
I made some tests now and found out that this is a udev and a timing issue. I have the impression that `udevadm settle` doesn't work correctly and doesn't wait for every device to be settled what it actually is supposed to do as far as I understand its manpage. I guess that's the same reason why I have or at least had some issues with the official Arch install CD.
Regarding fbsplash at least a workaround, which works for me, is to move fbsplash to the end of the HOOKS array in /etc/mkinitcpio.conf and to add, of course, the hooks udev and usbinput to the HOOKS array. The HOOKS array shoud look like this:
HOOKS="base udev autodetect usbinput ... fbsplash"
The only difference I could determine is that switching from silent to verbose splash is done by pressing Alt-F1 during the kernel initialization and after this by pressing F2.
@ShadowKyogre and sausageandeggs: Could you, please, test this?
Anonymous comment on 2011-09-05 10:41
@xaer0knight: Possible reasons could be that you either have missing x rights for your /tmp directory resp. the related subdirectories or you have mounted /tmp with the noexec option.
Anonymous comment on 2011-09-05 08:29
@xaer0knight: I can't reproduce it, neither with pure makepkg nor with yaourt. I guess you either have wrong file permissions for your /tmp directory or the related subdirectories or there's an issue with your yaourt installation.
Anonymous comment on 2011-09-05 05:51
I have encountered this error since Oct/Nov 2011:
==> Starting build()...
/tmp/yaourt-tmp-xaer0/aur-fbsplash/./PKGBUILD: line 49: ./configure: Permission denied
==> ERROR: A failure occurred in build().
Aborting...
==> ERROR: Makepkg was unable to build fbsplash.
Anonymous comment on 2011-08-12 21:11
Had to remove some unnecessary sed commands which I added just for testing purposes.
Maxr commented on 2011-08-12 05:56
works. Thanks!
Anonymous comment on 2011-08-11 23:37
Well, kozzi already mentioned that adding export LIBS="-lbz2" should fix the issue. When I tried this after he mentioned it, it didn't work for me. Now it is working. I don't know why. Nevertheless I've adopted it.
Tell me, if it doesn't work for you.
Anonymous comment on 2011-08-11 23:30
Adopted. Thanks.
Anonymous comment on 2011-08-11 22:54
Guys, I found solution!
Just edit PKGBUILD file and append LIBS="-lbz2" before ./configure line, e.g.:
LIBS="-lbz2" ./configure --prefix=/usr --sysconfdir=/etc --without-klibc --enable-fbcondecor --with-gpm --with-mng --with-png --with-ttf --with-ttf-kernel
It works well for me
Anonymous comment on 2011-08-11 19:09
@Maxr: If you disable ttf support you won't see any text messages anymore like "Press F2 for verbose screen", "Initializing kernel", textboxes, etc.
Maxr commented on 2011-08-11 19:00
disabling ttf in configure will let you build the package at least. Don't know wether some themes won't work then, tough. Didn't test that yet.
Anonymous comment on 2011-08-09 13:28
@Shanto: If you have read my comment you would know that this is an upstream bug which is already filed to upstream. Now we have to wait until a new upstream release. And, no, this hasn't anything to do with the kernel.
Anonymous comment on 2011-08-09 13:27
Please, stop flagging this package as out-of-date as long as there's no NEW upstream release!
Shanto commented on 2011-08-09 05:55
Seems like fbsplash and miscsplashutils needs some work after linux (formerly kernel26) hits 3.0 in the official repo.
Anonymous comment on 2011-08-01 23:32
fbsplash currently doesn't compile due to an upstream bug either in fbsplash due to a change in freetype2 or in freetype2. Adding "export LIBS='-lbz2'" doesn't work.
Fbsplash upstream is already contacted.
Det commented on 2011-07-14 17:44
I do know [testing] is not a stable repo, my friend :D. What the name implies is kinda obvious.
Anonymous comment on 2011-07-14 12:12
@Det: You'd better read the mailing lists. Then you'd know that [testing] is not a stable repo. See e.g. all those e-mails regarding non-bootable systems after a kernel update in [testing], etc. No, it was not my fault, it was [testing] which has caused my broken system. [testing] is what it says: it's for testing purposes and not for production systems.
Det commented on 2011-07-14 06:20
That's also what the not-as-lazy-as-I-am people are for. And I don't doubt you've broken your system in the past. It's not a child distribution.
Anonymous comment on 2011-07-13 21:32
If you know that this is a freetype2 bug, why don't you file a bug report? That's what [testing] is for.
And if you're really using [testing] then you should know what you're doing and how to edit the PKGBUILD by yourself. I'm not supporting [testing] because it can break your system pretty easily. And I know what I'm talking about.
Det commented on 2011-07-13 19:53
No, because it _is_. But this "LIBS='-lbz2'" trick didn't work for me.
E: D'oh, it has to be "export LIBS='-lbz2'", so something like this would fix this:
[ `pacman -Q freetype2 | cut -d " " -f2` > 2.4.4 ] && export LIBS='-lbz2'
It's a bit ugly as it is and if you don't even like (supporting) [testing] then that doesn't make things look good.
Det commented on 2011-07-13 19:44
No, because it _is_. But this "LIBS='-lbz2'" trick didn't work for me.
Anonymous comment on 2011-07-13 18:53
@kozzi and Det: Are you sure that this is not a bug in the freetype2 package from [testing]?
kozzi commented on 2011-07-13 17:08
@Det: how i already write, you must add LIBS='-lbz2' before configure in PKGBUILD
Anonymous comment on 2011-07-13 14:38
@cyberpatrol: Done. :D
Anonymous comment on 2011-07-13 13:04
@kujub: No objections, patch applied. But don't forget to make these changes in fbsplash-extras, too. ;-)
Anonymous comment on 2011-07-13 08:52
@cyberpatrol:
I made a patch for fixing the initcpio hook to use the new /run/ tmpfs now instead of abusing /dev/ for the early daemon start.
Please apply if no objections.
Anonymous comment on 2011-07-13 01:32
@Det: Which freetype version do you have installed? And do you use [core]/[extra] or [testing]?
Det commented on 2011-07-13 01:16
Is it just me?:
/usr/lib/gcc/x86_64-unknown-linux-gnu/4.6.1/../../../../lib/libfreetype.a(ftbzip2.o): In function `FT_Stream_OpenBzip2':
(.text+0x5fe): undefined reference to `BZ2_bzDecompressInit'
collect2: ld returned 1 exit status
Anonymous comment on 2011-07-12 17:55
@cyberpatrol:
grep 'err.*()' /lib/initcpio/*functions
/lib/initcpio/functions:error() {
/lib/initcpio/init_functions:err () {
So please revert 'error' to 'err' in run_hook() as installing hooks and running hooks are two totally different things.
Anonymous comment on 2011-07-12 16:45
Replaced err by error.
Anonymous comment on 2011-07-12 16:26
@falconindy: The err function still exists in mkinitcpio 0.7.x in the file /lib/initcpio/init_functions and is still used by the hooks net and encrypt of mkinitcpio 0.7.x.
falconindy commented on 2011-07-12 15:56
The 'err' function no longer exists in mkinitcpio 0.7.x as I also pointed out here:
If you're not seeing the error regarding the missing command, then you're obviously never hitting the block of code that triggers it. Please update to use the new mkinitcpio API (error, not err).
Anonymous comment on 2011-07-12 11:18
@rubyinthedust: I have the same HOOKS array so far and I also upgraded to the new mkinitcpio version, but I only got the message about the deprecated function call (that was due to a change in the new mkinitcpio version) but not the "command not found" error. And the function err is also used by other hooks from the mkinitcpio package. So you must get this error message by the other hooks, too, if you have one of them in your HOOKS array. Maybe try to temporarily add the hooks net and encrypt to your HOOKS array if you don't have them anyway and see if you get the error for these hooks, too. Btw., I have the encrypt hook in my HOOKS and don't get this error message.
Anonymous comment on 2011-07-12 06:42
HOOKS="base fbsplash udev autodetect..."
There was a mkinitcpio update just the other day, but it's strange that it is only in the fbsplash hook. I'll ask around in the forum.
Anonymous comment on 2011-07-11 22:31
@kozzi: Sorry, but I don't support [testing] for good reasons. If you're using [testing] you should know what you're doing and be prepared for serious breakages. [testing] should only be used for testing purposes and not on production systems. That's what [testing] is meant for.
Nevertheless it would be nice, if you would tell me, which package from [testing] causes this build failure, and if you would post the error messages you get.
kozzi commented on 2011-07-11 18:01
Doesnt build on testing, please add LIBS="-lbz2" before ./configure
Anonymous comment on 2011-07-11 10:39
Fixed the function call. Thanks.
But I can't reproduce the other error "command not found". Err is a function in /lib/initcpio/init_functions which is part of the package mkinitcpio and is added to the initrd by the base hook. Do you have the hook base at the first position of the HOOKS array in /etc/mkinitcpio.conf?
Anonymous comment on 2011-07-11 08:02
just updated and get errors while generating hooks with mkinitcpio -p kernel26
==> WARNING: Hook 'fbsplash' uses a deprecated 'install' function. This should be renamed 'build'
/lib/initcpio/install/fbsplash: line 90: err: command not found
Det commented on 2011-06-22 12:08
Just responding to a apparently over 2 months old comment.
Anonymous comment on 2011-06-22 10:52
@kujub: Thanks. Patch applied.
Anonymous comment on 2011-06-22 09:06
Patch for removing obsolete workaround code for FS#10536:
Anonymous comment on 2011-06-14 12:21
@Det: What do you want to tell us? And what does this have to do with fbsplash?
Det commented on 2011-06-14 12:13
Reinstallation shouldn't be needed anyway since we can always chroot o_O:
And even if our _Pacman_ didn't work we could still manually extract our required package(s) from Live-CD and put them in place.
graysky commented on 2011-06-13 22:56
I get the following error when I rebuild the initrd.
# mkinitcpio -p kernel26-ck
==> Building image "default"
==> Running command: /sbin/mkinitcpio -k 2.6.39-ck -c /etc/mkinitcpio.conf -g /boot/kernel26-ck.img
:: Begin build
:: Parsing hook [base]
:: Parsing hook [fbsplash]
grep: /lib/initcpio/install/dev: No such file or directory
FATAL: Hook 'dev' can not be found.
==> FAIL
==> Building image "fallback"
==> Running command: /sbin/mkinitcpio -k 2.6.39-ck -c /etc/mkinitcpio.conf -g /boot/kernel26-ck-fallback.img -S autodetect
:: Begin build
:: Parsing hook [base]
:: Parsing hook [fbsplash]
grep: /lib/initcpio/install/dev: No such file or directory
FATAL: Hook 'dev' can not be found.
==> FAIL
Anonymous comment on 2011-06-13 21:57
@artiom: Do you still get the error with the new version?
artiom commented on 2011-05-17 09:26
I have this error every time on boot.
Anonymous comment on 2011-05-05 12:27
@artiom: I can't reproduce this. Do you have these errors always or only once?
I can find a similar error message in my logs but for plugin-container resp. libflashplayer.so and only once. So I guess that something went wrong only once.
artiom commented on 2011-05-02 14:54
fbsplash stop daemon logs this error in the system log:
fbsplashd.stati[2480]: segfault at 7facdec99010 ip 000000000052398b sp 00007facdcbd6c88 error 4 in fbsplashctl[400000+1e4000]
Anonymous comment on 2011-04-08 10:40
@sharsma: Reinstallation of the complete system wasn't necessary. Just a `nano /etc/makepkg.conf` as root and probably a `pacman -S base-devel`.
Anonymous comment on 2011-04-08 10:00
@cyberpatrol: i reinstall arch x86_64 and select the (base-devel) and everything is ok, thanks!
Anonymous comment on 2011-04-08 08:22
@sharsma: Have you looked at your /etc/makepkg.conf?
The first error message is:
make[3]: O2: command not found
This is most likely just a consequential error resulting from the first:
... jcapimin.o: file or directory not found
As O2 is a CFLAG, you most likely have a typo, a missing "-" in your /etc/makepkg.conf, so that O2 is not interpreted as a flag, but sa a command, which, of course, can't be found.
Anonymous comment on 2011-04-08 08:16
can not build under x86_64, the error message likes:
make[3]: O2: command not found
... jcapimin.o: file or directory not found
may have problems with libjpeg?
Anonymous comment on 2011-04-08 08:10
@sharma: You probably don't have (base-devel) installed or have a typo in your /etc/makepkg.conf.
Is it possible that you've edited your makepkg.conf and removed a "-"?
CFLAGS="-march=x86-64 -mtune=generic O2 -pipe"
CXXFLAGS="-march=x86-64 -mtune=generic O2 -pipe"
instead of
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"
CXXFLAGS="-march=x86-64 -mtune=generic -O2 -pipe"
Anonymous comment on 2011-04-08 07:55
can not build under x86_64, the error message likes:
make[3]: O2: command not found
... jcapimin.o: file or directory not found
may have problems with libjpeg?
Anonymous comment on 2011-04-03 11:36
No It doesn't work with or without the patch (unless i comment the for loop, -35 does without the patch, but I'm pretty sure it just because of my terminally slowly mounting disk.
Anonymous comment on 2011-04-03 10:24
@sausageandeggs: Doesn't it work with the current version, without the patch and with the later function call, too?
Anonymous comment on 2011-04-03 09:58
@cyberpatrol It seems to be a timing issue like you said, it's the for loop in splash_set_event_dev that throws the error. Running these cmds myself always returns a value for $t. I'm not sure but I think it's happening because the partition that holds sysfs is ext4 which isn't mounting quick enough and ..... etc etc. Commenting out the for loop and hard coding a value for $t if it's empty fixes the problem.(not ideal I know but my keyboard is always the same and otherwise it doesn't seem to change). Thanks for the pointer s anyway, if I do find that its a different reason I'll let you know.
Anonymous comment on 2011-04-01 01:18
@sausageandeggs: You're welcome. I guess it's a timing issue. I guess your input device isn't settled if this function is called earlier. So the devices aren't in sysfs at that time, but probably a millisecond later.
Just as a matter of interest you could have a look at /sbin/splash-functions.sh. There you find the function splash_set_event_dev(), and in this function you see some if clauses and a for loop. Maybe you could execute the commands in these if clauses (grep ..., cat ..., echo ...) one after another and look which condition is met for your system.
At least when the patch was applied, and the function was called earlier the if clause in the for loop was caught. My assumption is that there haven't been the directories /sys/class/input/input*. So the for loop couldn't expand the wildcard and $i didn't get the file names but the path as a string. So in the next if clause cat didn't give an integer value but a string. Thus you had the arithmetic syntax error.
Anonymous comment on 2011-04-01 01:02
Thanks for your help cyberpatrol, when I get time, hopefully at the weekend, I'll take a look at it. If I find anything I'll let you know.
Anonymous comment on 2011-03-31 08:54
Thanks, sausageandeggs.
I personally can't find any differences between fbsplash with or without this patch. So I didn't remove the patch completely, but commented it. If somebody has issues with or concerns about this, please, write a comment.
Anonymous comment on 2011-03-31 02:50
I've rebuilt the pkg and made sure that fbsplash was second (I had it 3rd) and the error has reoccurred. I'll just leave that patch out for now.
Anonymous comment on 2011-03-30 06:31
Yes the msg was just a one liner with the wildcard
can't open '/sys/class/input/input*/capabilities/ev' no such file or directory
/init: line 619: arithmetic syntax error
I'm pretty sure that I have fbsplash second in the hooks array, it's definitely early anyway, No access at the moment but I'll check/rebuild later on.
Anonymous comment on 2011-03-30 06:01
@ShadowKyogre and sausageandeggs: Yet another question: Have you added fbsplash to the HOOKS array in /etc/mkinitcpio.conf? If yes, could you, please, set base at the first and fbsplash at the second position in the HOOKS array (HOOKS="base fbsplash ..."), reinstall fbsplash with the patch (uncomment the line with the patch in the PKGBUILD again), rebuild the initrd and test it again?
Anonymous comment on 2011-03-30 05:50
@sausageandeggs: It's not too easy to find out and admittedly it was just a wild guess to find out where to look for the reason.
@ShadowKyogre and sausageandeggs: Another question: Was the error message really only a one-liner with the wildcard in it like the following?
cat: can't open '/sys/class/input/input*/capabilities/ev' no such file or directory
Or have there been several lines with the expanded wildcard like the following?
cat: can't open '/sys/class/input/input0/capabilities/ev' no such file or directory
cat: can't open '/sys/class/input/input1/capabilities/ev' no such file or directory
cat: can't open '/sys/class/input/input2/capabilities/ev' no such file or directory
etc.
I'm currently trying to find out if this is indeed a bug in the script or if the function call was done too early for your systems with kujub's patch.
Anonymous comment on 2011-03-30 02:14
Yes the rebuilt pkg without the patch works fine, F2 switching also works. Thanks Cyebrpatrol, I really can't believe that that didn't occour to me!
Anonymous comment on 2011-03-29 18:20
@ShadowKyogre and sausageandeggs: If the modified package works and boots correctly, please, also test, if you can switch between the silent and the verbose splash and vice versa with F2.
Anonymous comment on 2011-03-29 18:10
@ShadowKyogre and sausageandeggs: Could you, please, rebuild and reinstall this package. But before doing this, please, comment the following line in the PKGBUILD:
patch -Np2 -i ${srcdir}/splash_start_initcpio.patch
So that it looks like this:
#patch -Np2 -i ${srcdir}/splash_start_initcpio.patch
I guess that your issue is either caused by kujubs new patch or it's an upstream bug probably caused by one of the two latest bash updates.
Anonymous comment on 2011-03-23 16:34
I to have this problem, it started around the same time as the *-35 update but I've downgraded to *-34 and it's still the same.
I'm using bash (only shell installed),evdev is there
ShadowKyogre commented on 2011-03-23 16:02
@cyberpatrol: Ah, okay. Output:
--
evdev 7179 5
--
Anonymous comment on 2011-03-23 15:07
@ShadowKyogre: Of course you mustn't enter the `.
ShadowKyogre commented on 2011-03-23 14:50
.
@kujub: Yes, it still boots. It just can't resume from hibernate if I select the silent option for some odd reason.
ShadowKyogre commented on 2011-03-23 14:48
.
Anonymous comment on 2011-03-23 10:48
@ShadowKyogre: Before rebuiling the initcpio, please, run `lsmod | grep evdev`. I don't need to explicitly add evdev to MODULES, neither in /etc/rc.conf nor in /etc/mkinitcpio.conf.
Have you updated your system particularly kernel26-fbcondecor and fbsplash to the latest versions?
And which shell are you using? Have you bash installed?
What I'm wondering is "can't open '/sys/class/input/input*/capabilities/ev' no such file or directory". Of course there's no directory .../input*/... The * is a wildcard which should be substituted by bash.
And I'm wondering about "/init: line 619: arithmetic syntax error". I guess your shell interprets the * probably as an arithmetical operator.
Anonymous comment on 2011-03-23 08:57
@ShadowKyogre: Am I right thinking your system still boots with silent splash enabled? So this would be not a big problem since you don't see that message normally. :) To fix it anyways, please add evdev to MODULES in your mkinitcpio configuration, rebuild your initcpio and try again.
Anonymous comment on 2011-03-23 00:38
@kujub: Is it possible that this error message is related to your latest changes regarding evdev?
ShadowKyogre commented on 2011-03-22 23:36
@kujub: When I installed fbsplash recently, I get this error whenever I tell the kernel to silently boot:
--
cat: can't open '/sys/class/input/input*/capabilities/ev' no such file or directory
/init: line 619: arithmetic syntax error
--
I'm not sure what's causing this, but it only happens when I include the hook in my initramfs.
Anonymous comment on 2011-03-07 15:13
@kujub: Updated. Thanks for the patch,
Anonymous comment on 2011-03-07 12:22
@cyberpatrol: I made a patch for a new pkgrel 35:
This is a cleanup of the scripts and also contains the following fixes:
- Fix procfs missing when calling splash_setup in case of boot w/o initcpio (custom kernels).
- Also find and include referenced files (images) when including the entire theme into initcpio.
- Also allow daemon start in initcpio with scripted themes (like arch-banner-icons with fbsplash-extras>=2.0.10)
- Use evdev if available to allow changing back to the splash screen when using F2-key.
Anonymous comment on 2011-01-10 23:12
@m0nhawk: Why should that be fbsplash 2? There is no fbsplash 2, yet. As you saw on upstream's page this is the latest fbsplash version. Btw., splash-utils-gentoo is Gentoo specific.
dsadcsadsadacasd commented on 2011-01-10 13:04
From official page:
"13 Nov 2008: splashutils-1.5.4.3 and splashutils-gentoo-1.0.16 released."
Is it real fbsplash 2 yo?
Anonymous comment on 2010-10-17 02:14
Just a PKGBUILD cleanup.
Anonymous comment on 2010-10-10 16:05
I have installed fbsplah and fbcondecor.
It works well if I use the themes "arch-black" and "arch-banner".
However, when I change my own picture, it fails to load either silent or verbose mode.
I've wonder whether the size of my picture is too large, but I replaced a small one, either.
Anyone knows?
Anonymous comment on 2010-08-07 12:17
Unknown didn't mention, yet, which bootloader he is using. And how shall GRUB2 be able to preventing a framebuffer device?
I guess it has something to do with a system configuration, probably kernel line or whatever. But I'm not a framebuffer expert. So I'd suggest asking for help with setting up a framebuffer device in the forums or the mailing lists.
Anonymous comment on 2010-08-07 10:26
It seems to me your problem has nothing to do with fbsplash at all since you are talking about GRUB2 here.
unknown commented on 2010-08-06 20:35
@cyberpatrol
ok, it seems that vga= is needed. I replaced it with gfxpayload= as vga= is depreciated which broke things.
Still cant run a demo though due to the missing framebuffer device.
Anonymous comment on 2010-08-06 19:24
@unknown: As I said this hasn't anything to do with fbsplash, because this package hasn't changed anything except for merging two packages to one. The software is exactly the same and fbsplash is not concerned with setting up a framebuffer device.
If you don't have a framebuffer device, you can look at your kernel line in /boot/grub/menu.lst or in lilo.conf and check if you have the necessary parameters in there particularly vga= or video=. Please read the Wiki page again for details. Or check if you have "fbsplash" behind "udev" in the HOOKS array in /etc/mkinitcpio.conf.
Otherwise you should ask in the forums or the mailing lists for getting help with setting up the framebuffer device.
unknown commented on 2010-08-06 19:12
@cyberpatrol
I did follow the guide. And as i said, something broke after an update.
Afaik, releases <.31 work fine. After that something broke.
I'm sorry, I.
Anonymous comment on 2010-08-06 18:26
@unknown: Btw., the latest fbsplash update didn't change anything except for the merging of the daemon with the scripts into one package instead of two and one additional option for another login manager.
Anonymous comment on 2010-08-06 18:22
@unknown: fbsplash is first complaining about a missing framebuffer device and insufficient permissions:
> open("/dev/tty0", O_RDWR) = -1 EACCES (Permission denied) <-- fbsplashd is not run as root and has not enough permissions.
> open("/dev/fb0", O_RDWR) = -1 ENOENT (No such file or directory) <-- There's no framebuffer device.
> open("/dev/fb/0", O_RDWR) = -1 ENOENT (No such file or directory) <-- There's no framebuffer device.
> open("//etc/splash/arch-black/0x0.cfg", O_RDONLY) = -1 ENOENT (No such file or directory) <-- It, of course, can't find this file, because there's no theme for the screen resolution 0x0.
Please read the fbsplash wiki and configure your system accordingly.
Don't forget to create a new initrd image after you changed your configuration, so run `mkinitcpio -p <kernel_name>`.
unknown commented on 2010-08-06 17:41
@kujub
fbsplash is complaining about a missing theme, not a missing framebuffer device.
Why isn't there such a framebuffer device, and why did things break with a recent fbsplash update?
Anonymous comment on 2010-08-06 17:31
@unknown: And you need to start fbsplashd as root.
unknown commented on 2010-08-06 17:17
@kujub
please *read* the strace. the problem is not with a missing framebuffer, but with te fact that it cant read the theme.
Anonymous comment on 2010-07-24 11:16
@unknown:
> open("/dev/tty0", O_RDWR) = -1 EACCES (Permission denied)
You have to be root to be able to start fbsplashd.
> open("/dev/fb0", O_RDWR) = -1 ENOENT (No such file or directory)
> open("/dev/fb/0", O_RDWR) = -1 ENOENT (No such file or directory)
> open("//etc/splash/arch-black/0x0.cfg", O_RDONLY) = -1 ENOENT (No such file or directory)
You need some sort of framebuffer ;-)
unknown commented on 2010-07-23 03:58
This version broke my fbsplash setup:
Failed to load theme 'arch-black'.
Strace from console:
Anonymous comment on 2010-07-08 20:09
Pacman asks for removal confirmation only with conflicts. That's why I added this one. ;-)
Anonymous comment on 2010-07-08 20:03
Hmm, I actually don't know - does pacman ask the user for removal confirmation with replaces too?
Anonymous comment on 2010-07-08 19:57
I'll add optdepends. But replaces isn't necessary, conflicts is sufficient.
Anonymous comment on 2010-07-08 19:21
OK, well done and very good timing too :) I already submitted a new package 'fbsplash-extras' replacing 'initscripts-extras-fbsplash'. Could you change your optdepends and maybe add a replaces=('fbsplash-scripts')?
Anonymous comment on 2010-07-08 17:26
Merged fbsplash-scripts to fbsplash. I set conflicts=('fbsplash-scripts' 'initscripts-extras-fbsplash') to avoid pacman errors due to the dupes. So fbsplash-scripts and initscripts-extras-fbsplash are deinstalled during the update, but the config files are backed up.
@kujub: I'd suggest not to move initscripts-extras-fbsplash to fbsplash-scripts but to a new package fbsplash-extras, fbsplash-extra-scripts or the like and let fbsplash-scripts and initscripts-extras-fbsplash be removed from AUR. This would clarify that this package only adds optional extra functionality to fbsplash and makes the update more convenient for the users. I also added two options to /etc/conf.d/splash for lxdm and slim users. If you need to make any changes at the scripts just send me the updated scripts.
Anonymous comment on 2010-06-24 17:40
cyberpatrol: Now 'initscripts-extras-fbsplash' no longer depends on 'fbsplash-scripts'. Both contain the same configuration and fbcondecor script files now. If you want to reduce the amount of packages to be installed further, you could merge 'fbsplash-scripts' into 'fbsplash' (after waiting for any bug reports for a while). Then I could drop the dupes from 'initscripts-extras-fbsplash' and move/rename the remaining improved scripts to 'fbsplash-scripts'.
Anonymous comment on 2010-06-21 20:24
Hi, tpavlic, as splash_cache_cleanup in /sbin/splash-functions.sh would create that directory and additionally redirects mount --move errors to /dev/null, I guess you are actually using 'initscripts-extras-fbsplash' which provides its own splash_cache_cleanup function. ;-) For solving your problem it should be enough to reinstall the 'fbsplash' package, since the directory should be in there (according to the last line of the PKGBUILD).
tpavlic commented on 2010-06-21 20:01
Lately, I've noticed a "/lib/cache/tmp mountpoint doesn't exist" (or something along those lines) after rc.multi finishes loading deamons. I also notice that
cachedir on /lib/splash/cache type tmpfs (rw,relatime,size=4096k,mode=644)
remains mounted (it contains the "arch-banner-icons" folder from my rotating arch-banner animated splash screen). I believe this cache is suppoesd to be removed after the splash screen finishes, and so I think there is a bug in the cache_cleanup code in /sbin/splash-functions.sh script.
Anonymous comment on 2010-06-07 19:21
Sorry for not responding for such a long time, but I hadn't had time to look at it.
Generally I agree, but at first glance I think it would be better to also include the initscripts-extras-fbsplash, because I think one package should provide every feature. Features should be enabled or disabled just by config files and not by installing or uninstalling separate packages in spite of what the devs say about the "long" scripts. I'll think about it.
Currently it's working as it is. That's essential.
Anonymous comment on 2010-06-07 18:33
Thank you KaoDome, but there are new versions of initscripts, mkinitcpio, fbsplash-scripts and initscripts-extras-fbsplash on the way now. So if cyberpatrol would agree, he could wait until some days beyond the new packages hit core and AUR to save him some work. Hopefully the mkinitcpio stuff wont change very often in the future. :)
KaoDome commented on 2010-06-07 18:20
I agree with kujub, right now I'm installing fbsplash-scripts and initscripts-extras-fbsplash. Shouldn't be better to have only one scripts package?
Anonymous comment on 2010-03-28 10:26
cyberpatrol: May I ask you to consider a new PKGBUILD: ? That one would move in the config-file, the initcpio-hook and the fbcondecor-daemon-script from fbsplash-scripts.
Benefits:
* The fbsplash-scripts dependency could be dropped again from initscripts-extras-fbsplash. (Say: Two scripts for boot splash to choose.)
* Those who just want fbcondecor (verbose mode) could even install fbsplash without any further script package
* fbsplash could even be used for a silent mode splash without initscripts but with upstart or something (no progress though, but some animation maybe, unless someone contributes some additional script code)
That would mean keeping the number of packages to install as small as possible while giving the users any options I can imagine. :) What do you think?
The link "" is not working. | https://aur.archlinux.org/packages/fbsplash/?ID=13541&detail=1&comments=all | CC-MAIN-2018-17 | refinedweb | 9,510 | 64.71 |
> > > > > Incidentally, I believe this test case to be in error. The type of the > argument in the caller and the callee is different. In particular, the > size of the argument is static in the caller and variable in the callee. > Whether or not an argument has variable size affects calling > conventions, and thus affects how the callee should find the argument in > varargs. > > A correct test case would have the caller have a variable sized > structure as well. Hmmm ... well this testcase arrives from a real example ... forwarding code in some advanced OO stuff which worked perfectly well with all GCC releases up to now (code from mframe.m in gnustep-base). If that is wrong, other code is wrong ... and well variable size structs are much less useful ... are you meaning that if I have int size = 255; struct { char text[size]; } SMSmessage; then I can't pass the SMSmessage to a function accepting a struct {char text[255]; } as argument ? So I can use SMSmessages in place of a struct {char text[255];} everywhere I want, but I can't pass it in place of a struct {char text[255];} to a function ? This is very confusing for users I must say. And it is not how it worked in previous versions of the compiler. Anyway - if that does not longer work on GCC 3.1, at least let me know how do we make our code to work with GCC 3.1. I think you understood what the function in the testcase "wants/needs" to do - which is that it can be called with an arbitrary struct (fixed size) as the second argument, and it is informed of the size of the struct from the first argument (which is just to say, a source providing the info at runtime), and it needs to be able to access the bytes in the struct. This is needed to implement proper forwarding of methods when there are structs. 
I guess there might be other problems then - here is an excerpt of code needing to return a struct of a certain (fixed) size, which is only known when the function is called - case _C_ARY_B: case _C_STRUCT_B: case _C_UNION_B: { typedef struct { char val[size]; } block; inline block retframe_block(void *rframe) { __builtin_return (rframe); } *(block*)buffer = retframe_block(retframe); break; } This works with GCC < 3.1. If what you say is true, this will no longer work with GCC 3.1, because 'block' is a variable size struct, and can't replace a fixed size struct as the return type. So - question - since this stuff which worked fine on all previous compilers no longer works, and you say the code is broken - ok then how do we fix the code ? | http://lists.gnu.org/archive/html/gnustep-dev/2002-03/msg00050.html | CC-MAIN-2014-10 | refinedweb | 455 | 76.66 |
Patent application title: Data Storage and Processing Service
Inventors:
Amit Agarwal (Fremont, CA, US)
Michael Sheldon (Seattle, WA, US)
Andrew Kadatch (Redmond, WA, US)
IPC8 Class: AG06F1730FI
USPC Class:
707769
Class name:
Publication date: 2012-01-19
Patent application number: 20120016901
Abstract:
In general, the subject matter described in this specification can be
embodied in methods, systems, and program products. A request to store
data is received. The data is stored as an object in a repository. A
request to create a table is received, where the request identifies a
name for the table. The table is created with the name. A request to
import the data into the table is received. The data is imported into the
table, where importing the data in the object into the table includes
converting the data in the object into columnar stripes, and storing the
columnar stripes in association with the table. A request to perform a
query on the table is received, where the request includes the query and
identifies the table. The query is performed on the table, where
performing the query includes querying one or more of the columnar
stripes.
Claims:
1. A computer-implemented method, the method comprising: receiving, at a
server system and from a remote computing device, a request to store data
at the server system; storing, by the server system, the data identified
in the request as an object in a repository at the server system;
receiving, at the server system and from a remote computing device, a
request to create a table, wherein the request identifies a name for the
table; creating, by the server system and at the server system, the table
with the name identified in the request; receiving, at the server system,
a request to import the data in the object into the table; importing, by
the server system, the data in the object into the table, wherein
importing the data in the object into the table includes: (i) converting
the data in the object into columnar stripes, and (ii) storing the
columnar stripes in association with the table; receiving, at the server
system and from a remote computing device, a request to perform a query
on the table, wherein the request includes the query and identifies the
table; and performing the query on the table, wherein performing the
query includes querying one or more of the columnar stripes.
2. The computer-implemented method of claim.
3. The computer-implemented method of claim 2, wherein the repository stores buckets in a flat namespace such that buckets are not nested.
4. The computer-implemented method of claim 2, wherein a bucket in the collection provides multiple different remote computing devices that correspond to different authenticated user accounts an ability to upload data from the different remote computing devices to the bucket.
5. The computer-implemented method of claim 1, wherein the request to store data, the request to create the table, and the request to perform the query are received at the server system through application programming interfaces and from one or more remote computing devices that submit the requests over the internet.
6. The computer-implemented method of claim 1: further comprising.
7. The computer-implemented method of claim 6, further comprising deleting the object from the repository in response to importing the data in the object into the table.
8. The computer-implemented method of claim 1,.
9. The computer-implemented method of claim 8, wherein a particular one of the structured records includes multiple values for a particular field, and wherein the multiple values for the particular field are stored in a particular columnar stripe.
10. The computer-implemented method of claim 9, wherein the particular record includes multiple values for another field, and wherein the multiple values for the other field are stored in a different columnar stripe.
11. The computer-implemented method of claim 10, wherein the particular columnar stripe stores, in an adjacent relationship, values for the particular field from multiple of the records.
12. The computer-implemented method of claim 10, wherein the particular record includes nested sets of values for fields.
13. The computer-implemented method of claim 8, wherein a schema specifies a structure of the structured records in the collection; further comprising: receiving, at the server system and from a remote computing device, a request to extend the schema to include a new field; and generating, by the server system, a new columnar stripe for the new field without modifying the columnar stripes.
14. The computer-implemented method of claim 8, wherein a schema specifies a structure of the structured records in the collection; further comprising:.
15. The computer-implemented method of claim 1, wherein creating the table includes generating, in the repository, a delegate object having the name of the table.
16. The computer-implemented method of claim 15, wherein creating the table includes generating in a database metadata for the table, the delegate object references the metadata for the table so that the metadata is accessed when the server system performs an operation on the delegate object.
17. The computer-implemented method of claim 1, wherein the table is a structured data set that is configured to be queried by the server system.
18. The computer-implemented method of claim 1, wherein the object and the one or more columnar stripes are replicated among geographically dispersed computing devices that form the server system.
19. A computer-implemented method, the method comprising: transmitting, by a computing device and to a remote server system, a request to store data at the server system such that, in response to receiving the request, the server system stores the data identified in the request as an object in a repository at the server system; transmitting, by the computing device and to the server system, a request to create a table, the request identifying a name for the table, such that, in response to receiving the request, the server system creates the table with the name identified in the request;; and.
20. A system, the system comprising:.
Description:
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. Provisional Application Ser. No. 61/346,011, filed on May 18, 2010, entitled "Data Storage and Processing Service," the entire contents of which are hereby incorporated by reference.
TECHNICAL FIELD
[0002] This document generally describes techniques, methods, systems, and mechanisms for providing a data storage and processing service.
BACKGROUND
[0003] The present disclosure generally relates to large-scale analytical data processing. Such processing may demand a high degree of parallelism: reading one terabyte of compressed data in one second using today's commodity disks may require tens of thousands of disks. Similarly, CPU-intensive queries may need to run on thousands of cores to complete within seconds.
SUMMARY
[0004] A data storage and processing service is herein disclosed. The described service provides a scalable, interactive ad-hoc query system for analysis of nested data. By combining multi-level execution trees and columnar data layout, the described system and methods are capable of running rapid and efficient queries such as aggregation queries. A columnar storage representation for nested records, a prevalent data model that may be used in many web-scale and scientific datasets, is described. In accordance with an embodiment, a record is decomposed into column stripes, each column encoded as a set of blocks, each block containing field values and repetition and definition level information. Level information is generated using a tree of field writers, whose structure matches the field hierarchy in the record schema. The record can be assembled from the columnar data efficiently using a finite state machine that reads the field values and level information for each field and appends the values sequentially to the output records. Compared with traditional solutions that extract all of the data fields from every record, a finite state machine can be constructed that accesses a limited amount of data fields in all or a portion of the records (e.g., a single data field in all of the records). Moreover, by storing additional metadata such as constraint information with the columnar storage representation, additional types of queries can be supported.
[0005] A multi-level serving tree is used to execute queries. In one embodiment, a root server receives an incoming query, reads metadata from the tables, and routes the queries to a next level in the serving tree. Leaf servers communicate with a storage layer or access the data on local storage, where the stored data can be replicated, and read stripes of nested data in the columnar representation. Each server can have an internal execution tree corresponding to a physical query execution plan, comprising a set of iterators that scan input columns and emit results of aggregates and scalar functions annotated with level information. In another embodiment, a query dispatcher is provided which schedules queries based on their priorities and balances the load. The query dispatcher also provides fault tolerance when one server becomes much slower than others or as a replica becomes unreachable. The query dispatcher can compute a histogram of processing times for execution threads on the leaf servers and reschedule to another server when processing time takes a disproportionate amount of time.
[0006] A web service may provide users remote access to the query system and a supporting data storage system. Users of the web service may upload data to the data storage system for hosted storage. A portion of uploaded data may include collections of nested records and may be stored as an object. The web service may provide remote data hosting for multiple users, allowing the multiple users to stream data to the web service and aggregate the data in a single location. Users may create tables on which to perform queries, and may import the data in one or more objects stored in the data storage system into the tables. The import process can include converting nested records in an object into columnar data, and storing the columnar data in a different data layer than the objects. Thus, from a user's perspective, a table may be filled with data from objects, but actually may instead reference underlying sets of columnar data. In this case, queries of the tables by web service users may cause the query system to query particular columns of data that underlie the tables.
[0007] The columnar data may be queried in situ. Maintaining the columnar data on a common storage layer and providing mechanisms to assemble records from the columnar data enables operability with data management tools that analyze data in a record structure. The system may scale to numerous CPUs and be capable of rapidly reading large amounts of data. Particular embodiments can be implemented, in certain instances, to realize one or more of the following advantages. Nested data may be operated on in situ, such that the data may be accessed without loading the data with a database management system. Queries of nested data may be performed in a reduced execution time than required by other analysis programs. A columnar storage data structure that is implemented on a common storage layer enables multiple different analysis programs to access the columnar storage data structure.
[0008] As an alternative to the attached claims and the embodiments described in the below description, the present invention could also be described by one of the following embodiments:
[0009] Embodiment 1 is directed to a computer-implemented method. The method comprises receiving, at a server system and from a remote computing device, a request to store data at the server system. The method comprises storing, by the server system, the data identified in the request as an object in a repository at the server system. The method comprises receiving, at the server system and from a remote computing device, a request to create a table, wherein the request identifies a name for the table. The method comprises creating, by the server system and at the server system, the table with the name identified in the request. The method comprises receiving, at the server system, a request to import the data in the object into the table. The method comprises importing, by the server system, the data in the object into the table, wherein importing the data in the object into the table includes: (i) converting the data in the object into columnar stripes, and (ii) storing the columnar stripes in association with the table. The method comprises receiving, at the server system and from a remote computing device, a request to perform a query on the table, wherein the request includes the query and identifies the table. The method comprises performing the query on the table, wherein performing the query includes querying one or more of the columnar stripes.
[0010] Embodiment 2 is related to the method of embodiment.
[0011] Embodiment 3 is directed to the method of embodiment 2, wherein the repository stores buckets in a flat namespace such that buckets are not nested.
[0012] Embodiment 4 is directed to the method of embodiment 2, wherein a bucket in the collection provides multiple different remote computing devices that correspond to different authenticated user accounts an ability to upload data from the different remote computing devices to the bucket.
[0013] Embodiment 5 is directed to the method of any one of embodiments 1-4, wherein the request to store data, the request to create the table, and the request to perform the query are received at the server system through application programming interfaces and from one or more remote computing devices that submit the requests over the internet.
[0014] Embodiment 6 is directed to the method of any one of embodiments 1-5, wherein the method further comprises.
[0015] Embodiment 7 is directed to the method of embodiment 6, wherein the method further comprises deleting the object from the repository in response to importing the data in the object into the table.
[0016] Embodiment 8 is directed to the method of any one of embodiments 1-7,.
[0017] Embodiment 9 is directed to the method of embodiment 8, wherein a particular one of the structured records includes multiple values for a particular field, and wherein the multiple values for the particular field are stored in a particular columnar stripe.
[0018] Embodiment 10 is directed to the method of embodiment 9, wherein the particular record includes multiple values for another field, and wherein the multiple values for the other field are stored in a different columnar stripe.
[0019] Embodiment 11 is directed to the method of embodiment 10, wherein the particular columnar stripe stores, in an adjacent relationship, values for the particular field from multiple of the records.
[0020] Embodiment 12 is directed to the method of embodiment 11, wherein the particular record includes nested sets of values for fields.
[0021] Embodiment 13 is directed to the method of embodiment 8, wherein a schema specifies a structure of the structured records in the collection; and wherein the method further comprises: receiving, at the server system and from a remote computing device, a request to extend the schema to include a new field; and generating, by the server system, a new columnar stripe for the new field without modifying the columnar stripes.
[0022] Embodiment 14 is directed to the method of embodiment 8, wherein a schema specifies a structure of the structured records in the collection; and wherein the method further comprises:.
[0023] Embodiment 15 is directed to the method of any one of embodiments 1-14, wherein creating the table includes generating, in the repository, a delegate object having the name of the table.
[0024] Embodiment 16 is directed to the method of embodiment 15, wherein creating the table includes generating in a database metadata for the table, the delegate object references the metadata for the table so that the metadata is accessed when the server system performs an operation on the delegate object.
[0025] Embodiment 17 is directed to the method of any one of embodiments 1-16, wherein the table is a structured data set that is configured to be queried by the server system.
[0026] Embodiment 18 is directed to the method of any one of embodiments 1-17, wherein the object and the one or more columnar stripes are replicated among geographically dispersed computing devices that form the server system.
[0027] Embodiment 19 is directed to a computer-implemented method. The method comprises transmitting, by a computing device and to a remote server system, a request to store data at the server system such that, in response to receiving the request, the server system stores the data identified in the request as an object in a repository at the server system. The method comprises transmitting, by the computing device and to the server system, a request to create a table, the request identifying a name for the table, such that, in response to receiving the request, the server system creates the table with the name identified in the request. The method comprises. The method comprises.
[0028] Other embodiments of the described aspects include corresponding computer-readable storage devices storing instructions that, when executed by one or more processing devices, perform operations according to the above-described methods. Other embodiments may include systems and apparatus that include the described computer-readable storage devices and that are configured to execute the operations using one or more processing devices.
[0029] Embodiment 20 includes a system that comprises:.
[0030] The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS
[0031] FIG. 1 illustrates record-wise v. columnar representation of nested data.
[0032] FIG. 2 illustrates two sample nested records and their schema.
[0033] FIG. 3 illustrates column-striped representations of the sample nested records.
[0034] FIG. 4 is an algorithm for dissecting a record into columns.
[0035] FIG. 5 illustrates an automaton for performing complete record assembly.
[0036] FIG. 6 illustrates an automaton for assembling records from two fields, and the records that the automaton produces.
[0037] FIG. 7 is an algorithm for constructing a record assembly automaton.
[0038] FIG. 8 is an algorithm for assembling a record from columnar data.
[0039] FIG. 9 depicts a sample query that performs projection, selection, and within-record aggregation.
[0040] FIG. 10 illustrates a system architecture and execution inside a server node.
[0041] FIG. 11 is a table illustrating the datasets used in the experimental study.
[0042] FIG. 12 is a graph that illustrates the performance breakdown that may occur when reading from a local disk.
[0043] FIG. 13 is a graph that illustrates execution of both MapReduce and the described system on columnar v. record-oriented storage.
[0044] FIG. 14 is a graph that illustrates the execution time as a function of serving tree levels for two aggregation queries.
[0045] FIG. 15 is a graph that illustrates histograms of processing times.
[0046] FIG. 16 is a graph that illustrates execution time when the system is scaled from 1000 to 4000 nodes using a top-k query.
[0047] FIG. 17 is a graph that illustrates a percentage of processed tables as a function of processing time per tablet.
[0048] FIG. 18 is a graph that illustrates query response time distribution in a monthly workload.
[0049] FIG. 19 is a block diagram of a system for generating and processing columnar storage representations of nested records.
[0050] FIG. 20 is a flow chart of an example process for generating columnar data.
[0051] FIG. 21 is a block diagram illustrating an example of a system that implements a web service for data storage and processing.
[0052] FIG. 22 is a flowchart showing an example of a process for performing data storage and processing.
[0053] FIG. 23 is a block diagram of computing devices that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers.
[0054] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0055] This document describes techniques, methods, systems, and mechanisms for a data storage and processing service. The described system may generate and process columnar storage representations of nested records. As an illustration, an organization may store data from web pages in records of nested information. The nested information may be compiled in a columnar data storage format that enables efficient queries of the data using a multi-level execution tree. The columnar data may be re-assembled into records for input into analysis programs that operate on record-oriented data.
[0056] More specifically, each record may be an instantiation of a schema that defines a formatting of records, where the records are created in accordance with the schema. For example, a schema may identify various fields for storing information about a web page and a structure for organizing fields in a record and their corresponding values. When an instance of a record for describing the characteristics of a web page is generated, the record may include for each field a data element and a corresponding value. The data element may define the semantics of the value in accordance with a definition in the schema. The term data element and field may be used interchangeably in this document. Field may also refer to a combination of a data element and a corresponding value.
[0057] A particular record need not include all of the fields that are defined by a schema. Thus, the schema may serve as a `template` from which fields may be selected for the particular record. For example, the schema may include a field for defining information about video content in a web page. If a web page does not include video content, then the record corresponding to the web page may not include the field from the schema that defines information about videos on the web page. Thus, some of the fields may be `optional.`
[0058] Some of the fields in a record, however, may be `required.` For example, a `required` field in the schema may be a Uniform Resource Locator (URL) of a source location for the document that served the web page. The field may be required because every web page document may be retrieved from a source location (i.e., there is a URL available for every document) and because the field may be required to further process information on the web page (e.g., to determine if the content has changed).
[0059] A field may also be `repeatable.` A field that is in the schema and that is defined as repeatable may be replicated at the location defined by the schema repeatedly in an instantiation of the schema (i.e., in a record). For example, a schema may include a field that is for defining documents that link to the web page. The schema may only specify the field a single time, but may indicate that the field is repeatable (e.g., because several documents may link to a particular web page). Thus, a record for the web page may include multiple fields that identify a value for a linking web page. The repeated fields may be located at a same level and nested beneath a parent field in the record (as discussed in more detail below).
[0060] The fields of the schema (and thus the fields in the records) may be nested. In other words, some fields may be children of other fields, which may be referenced as the parent fields, grandparent fields, etc. In some examples, children nodes are those nodes in the schema that are found within a pair of opening and closing curly brackets immediately following the parent node. Other implementations for nesting, however, may be utilized (e.g., the use of a start tag for the field and an end tag for the field). Thus, except for the fields that are at the highest level (e.g., the fields that are not children of any other fields), each field may have a parent field.
[0061] Nesting may be helpful for organizing information into conceptually-related chunks of information. Returning to our earlier example, the schema may include a `Video` field. The `Video` field may include several children fields that may identify the characteristics of the video (e.g., how long the video is, the format of the video, and the resolution of the video). Thus, when a record is constructed, children nodes may not be placed in the record if their parent nodes are not present. In other words, a record for a web page that does not include a video may not include a `VideoLength` field because the record does not include a `Video` field (i.e., the parent of the `VideoLength` field). Application programs that enable viewing and editing a record may visually nest the dependent children fields under the parent field (e.g., indent the children to the right of the parent field).
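The schema behavior described above can be sketched with a nested dictionary. The field names below are illustrative, drawn from the running web-page example in the text rather than any actual schema:

```python
# A hypothetical record for a web page, following the text's examples:
# `Url` is required, `Video` is optional, and `Links` is repeatable.
record = {
    "Url": "http://example.com/page",   # required field
    "Links": [                          # repeated field: two instances
        {"From": "http://example.com/a"},
        {"From": "http://example.com/b"},
    ],
    "Video": {                          # optional parent field
        "VideoLength": 600,             # children nested under `Video`
        "Resolution": {"Width": 700},
    },
}

# A record for a page with no video simply omits the `Video` subtree,
# and with it all of `Video`'s children (e.g., `VideoLength`).
record_no_video = {"Url": "http://example.com/other"}
```

Note how the absence of the optional `Video` parent in the second record removes its children as well, without affecting the required `Url` field.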
[0062] Analyzing millions of records may be time consuming. In some examples a user is interested in data from a single field, but each of the records must be accessed in its entirety. For example, a user may request that an analysis program check each of millions of records to identify those records that are associated with web pages that include videos that are longer than ten minutes and that have a `High` resolution. Because each record may be stored as a separate data structure, each entire record may need to be loaded into a database management system in order to query the record to determine if the record includes the particular combination of video length and resolution.
[0063] Such a loading of every single record may be prohibitively expensive, both on the quantity of servers that are required to perform the task and an amount of time necessary to complete the query. Significant time savings can be obtained by storing all of the values for a particular field--selected from across the millions of records--together in a contiguous portion of memory. Such storage of values from several records but for a particular field is called columnar storage. In contrast, the example where information for a particular record is stored contiguously in memory is referred to as record-oriented storage.
[0064] Columnar storage for nested records, however, poses unique difficulties. A field in a record may be identified by its path, which may include a listing of the field and the parent fields (e.g., GrandParent.Parent.Child). Because one or more of the fields in the path may be repeating, there may be several instances of a field with the same path name. Thus, when looking at a consecutive listing of columnar data for a particular field, a mechanism is needed to identify which values belong to which records and, for those records that include multiple values for a particular path, the respective location of each value in the record. In other words, given a sequence of values in a columnar structure, a mechanism is needed to reconstruct the structure of the record from the values.
[0065] The mechanism for reconstructing the structure of a record from columnar data includes storing, for each value in the columnar data, a `repetition` level and a `definition` level. Each `level` is a sequence of bits that represents a number. For example, a `level` of 3 may be represented by two bits (e.g., `11`). In another example, a `level` of 5 may be represented by three bits (e.g., `101`).
[0066] The `repetition` level that is stored for a particular value indicates the field in the value's path that has most recently repeated. As an illustration, a column of values may be stored for a field with the path `Video.Resolution.Width.` A repetition level of `1` may indicate that the `Video` field most recently repeated, while a repetition level of `2` may indicate that the `Resolution` field most recently repeated. Recently repeating can indicate, from the position of the value in the record from which the value was selected and working upwards towards the beginning of the document, which field in the path `Video.Resolution.Width` is the first to reach a count of two (e.g., which field is encountered for the second time first).
[0067] For example, working upwards from the location of the `Width` value, each field is encountered a single time. Finding a second instance of each field requires traversing to the depths of the next, adjacent nested field (and possibly to further nestings). Thus, a `Video` field may be encountered that does not include any `Resolution` children (e.g., because the `Resolution` field is optional or a repeating field). Thus, the `Video` field has been encountered a second time and is thus the most recently repeated field. A repetition level of `1` is assigned to the value.
[0068] A repetition level of `0` may indicate that the field does not include a most recently repeated value (e.g., it has been encountered for the first time in the record during a top-down scan). In various examples, a `required` field in a path does not have a repetition level. For example, if the `Resolution` field is required for the `Video.Resolution.Width` path, the range of repetition levels may be either `0` or `1.` `Resolution` may not have a level because it is always present in the record when the `Video` field is present. Thus, if `Resolution` was assigned a level of `2,` it may always be encountered before `Video` and thus a level of `1` may not ever be assigned. Thus, not including a repetition level for required fields may enable the number of different repetition levels to be reduced, and the number of bits to represent the repetition level may be reduced.
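The repetition-level assignment described in the preceding paragraphs can be sketched as follows. This is a simplified illustration, not the patented method itself: it assumes `Video` is repeatable, `Resolution` is required (so it contributes no level of its own), and the record layout is the hypothetical dictionary form used earlier:

```python
def width_column(record):
    """Emit (repetition_level, value) pairs for the Video.Resolution.Width
    path of a single record.  The first Video encountered in a top-down
    scan has not repeated yet, so its level is 0; every subsequent Video
    is the most recently repeated field in the path, giving level 1."""
    pairs = []
    for i, video in enumerate(record.get("Video", [])):
        level = 0 if i == 0 else 1
        # Width is optional, so it may be absent (recorded here as None)
        width = video["Resolution"].get("Width")
        pairs.append((level, width))
    return pairs
```

For a record with two `Video` fields, the first carrying a `Width` of 700 and the second lacking one, this yields `[(0, 700), (1, None)]`: the missing `Width` still occupies a slot in the column so the record structure remains recoverable.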
[0069] If the field `Width` in the above example is an `optional` or `repeating` field, a record may not always include a value for the `Width` field. Thus, a column of values for the `Video.Resolution.Width` path may use a mechanism to designate when a `Video` or a `Video.Resolution` path is found in the record but the `Width` field has not been instantiated in the record. This mechanism may include storing, in the `Video.Resolution.Width` column of data, a `Definition` level for each `Video` or `Video.Resolution` field in the record regardless whether the `Width` field is instantiated. The `Definition` level may indicate how many of the fields in the `Video.Resolution.Width` path that could be missing (e.g., because the field is optional or repeatable) are actually present.
[0070] Thus, if the field `Video` is present in the record but no corresponding `Resolution` child is instantiated, a definition level of `1` may be recorded in the `Video.Resolution.Width` column. If the field `Video.Resolution` is present in the record, but no corresponding `Width` child is instantiated, a definition level of `2` may be recorded. If the field `Video.Resolution.Width` is present in the record, a definition level of `3` may be recorded.
[0071] Therefore, whenever the `Definition` level (which represents the number of fields that could be undefined but are actually defined) is less than the number of fields that could be defined, a missing occurrence of the `Width` field may be identified. The combination of the `Repetition` level and the `Definition` level may enable the structure of the record to be reconstructed.
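The definition levels from the preceding paragraphs can be sketched directly. This follows the text's example for the `Video.Resolution.Width` path, where each present field in the path adds one to the level; the record layout is again an assumed dictionary form:

```python
def width_definition_level(video):
    """Definition level for the Video.Resolution.Width path, following
    the text's example: a missing Resolution is recorded as level 1, a
    missing Width as level 2, and a fully instantiated path as level 3."""
    if video is None:
        return 0
    if "Resolution" not in video:
        return 1
    if "Width" not in video["Resolution"]:
        return 2
    return 3
```

Any stored level below 3 thus marks exactly where the path stops being instantiated, which is what allows the record structure to be rebuilt from the column.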
[0072] A column of data for a particular field (e.g., the `Video.Resolution.Width` field) may include the values for the field from multiple records, corresponding repetition and definition levels (acknowledging that some `missing` values may have a repetition and a definition level), and header information. In some examples, the values are stored consecutively and adjacent. In other words, if a value for one `Video.Resolution.Width` field was `700` and the value for a next `Video.Resolution.Width` field was `800,` a portion of the column as stored in memory may read `700800.` In this example, a header in the column may identify that each value has a fixed width (e.g., a fixed binary representation to hold the numbers 700 and 800).
[0073] In some examples, the stored values are represented by strings. For example, instances of the `Width` field may include the values `Small` and `Medium.` In some examples, the various string values may be a fixed length (e.g., a null value may be added to the beginning or end of the `Small` value to make the string the same length as the `Medium` value). In some examples, however, each stored string may include an identifier in a beginning portion of the string that identifies a length of the string. For example, the `small` value may include an identifier that indicates that the string is five digits long (or a corresponding number of binary bits).
[0074] Because the values may be stored consecutively in the columnar stripe, the `repetition` and `definition` levels may be stored at the beginning of the columnar stripe. In some examples, the `repetition` and `definition` levels are stored in pairs for a particular value (whether instantiated or missing). As an illustration, a repetition level of 3 may be stored in the first four bits of a byte and a definition level of 1 may be stored in the last four bits of the byte. A next byte in the header may include a repetition level and a definition level for the next instance of the field in the record (or the first instance in the subsequent record).
[0075] The number of bits used to represent the repetition and definition levels may be based on a maximum level value. For example, if the maximum repetition level is 3, the repetition level may be represented with two bits. If the maximum repetition level is 4, the repetition level may be represented with three bits. The header may include information that identifies the length of the repetition and definition levels.
[0076] In various examples, the repetition levels may be stored consecutively in memory and the definition levels may be stored consecutively in memory (e.g., not in pairs). In various examples, the repetition and definition levels may be stored in a group with their corresponding value (if the value is instantiated). In other words, a sequence of information in the columnar stripe may read Value1:RepetitionLevel1:DefinitionLevel1:Value2:RepetitionLevel2:DefinitionLevel2, and so on.
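One of the layouts sketched above — paired level bytes at the front of the stripe, followed by the instantiated values — might be packed as follows. This is a simplified sketch assuming four bits per level (as in the byte illustration earlier) and fixed-width 32-bit integer values:

```python
import struct

def encode_stripe(entries):
    """Pack a columnar stripe: one header byte per entry with the
    repetition level in the high four bits and the definition level in
    the low four bits, followed by the instantiated values stored
    consecutively as big-endian 32-bit integers.  Entries whose value
    is None (missing fields) contribute levels but no value bytes."""
    header = bytes((rep << 4) | dl for rep, dl, _ in entries)
    values = b"".join(struct.pack(">i", v)
                      for _, _, v in entries if v is not None)
    return header + values
```

For example, an entry with repetition level 3 and definition level 1 for a missing value, followed by an entry with levels 0 and 3 carrying the value 700, packs to two header bytes (`0x31`, `0x03`) and then 700 in big-endian.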
[0077] The columnar stripes may be compressed into blocks of information. For example, each columnar stripe may be split into a set of blocks, with each block including its own respective header. A first block may include the first 800,000 values and a second block may include a second 800,000 values from a stripe of 1.6 million values. A block header may include the repetition and definition levels along with additional information that may be used to help analyze the portion of the columnar stripe that is represented by the block, and to reconstruct the columnar stripe.
[0078] In some examples, the block header includes an `Assertion` value that defines a type of data that is found in the block's values. For example, a block for the `Video.Resolution.Width` field may not include any values that list `Large` width resolution. Thus, the `Assertion` value may indicate that the values only include `Small` and `Medium` values. If a query is performed for records that include `Large` width resolution videos, then the described block may be avoided by the querying system.
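How such an `Assertion` value could let the querying system skip blocks might be sketched as below; the block structure here is an assumption for illustration, not prescribed by the text:

```python
def blocks_matching(blocks, wanted):
    """Return only the blocks whose assertion (the set of distinct
    values the block contains) could satisfy the query; every other
    block is skipped without its values being read."""
    return [b for b in blocks if wanted in b["assertion"]]

blocks = [
    {"assertion": {"Small", "Medium"}, "values": ["Small", "Medium", "Small"]},
    {"assertion": {"Small", "Large"},  "values": ["Large", "Small"]},
]
```

A query for `Large` widths then scans only the second block, since the first block's assertion rules it out.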
[0079] The system described in this document may perform queries on columnar stripes without reconstructing the information in the columnar stripes into records, and without loading information from the columnar stripes into a database (e.g., without using an `Insert` clause). Thus, the data may be accessed in situ, which may provide computational analysis time savings of orders of magnitude.
[0080] The querying system may employ many of the clauses employed for querying relational databases. Additional clauses that are specific to non-relational data, however, may be employed. For example, a WITHIN clause may allow for operations to be performed on multiple instances of a field within a single record or a portion of a record. A relational database, however, may be unable to store more than a single instance of a field in a row (e.g., a representation of a record). Thus, a query on a relational database may be fundamentally unable to perform queries `within` a record.
[0081] As an example of the WITHIN clause, values for a particular field may be multiplied. Suppose that the query instructions request that all values for `MutualFund.InterestRate` be multiplied together for a particular record (where each record may be for a particular account holder). The querying system may find all of the `MutualFund.InterestRate` values within the single record and multiply them together.
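The within-record multiplication just described can be sketched as follows; the record layout and field names are the hypothetical ones from the text's example:

```python
from functools import reduce
from operator import mul

def interest_rate_product(record):
    """WITHIN-style aggregation sketch: multiply together every
    MutualFund.InterestRate value found inside a single record --
    an operation a one-row-per-record relational layout could not
    express directly, since it cannot hold repeated field instances."""
    rates = [fund["InterestRate"] for fund in record.get("MutualFund", [])]
    return reduce(mul, rates, 1.0)
```

A record holding rates of 1.05 and 1.02 would yield a product of about 1.071, computed entirely within that single record.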
[0082] Another example of a clause that may be specific to non-relational nested data is the OMIT IF clause. This clause may enable a record to be filtered to remove instances of fields if a particular condition is met (e.g., a new columnar stripe or record may be created with specified fields removed). As an illustration, a stripe of values that list employee salaries may be queried and a new stripe that removes employees with salaries above $90,000 may be generated using the OMIT IF clause.
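The salary illustration might be sketched like this — a simplified, hypothetical rendering of OMIT IF applied to a single stripe of values:

```python
def omit_if(stripe, condition):
    """OMIT IF sketch: build a new stripe that drops every value for
    which the condition holds, leaving the original stripe untouched."""
    return [value for value in stripe if not condition(value)]

salaries = [85000, 95000, 60000, 120000]
# Drop salaries above $90,000, producing a new, filtered stripe.
kept = omit_if(salaries, lambda s: s > 90000)
```

The original stripe is left intact, matching the text's description of generating a new stripe rather than mutating the existing one.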
[0083] The querying system may be hosted by a server system and provided over the internet to remote computing devices through application programming interfaces (API). In general, the columnar data may be represented to external users of the remote computing devices as stored within tables of information. The users may generate the tables using API calls and may fill the tables with data from a repository of objects.
[0084] The users may use separate API calls to load objects into the repository. For example, the server system may also implement an internet-accessible storage system that enables users to push data to the server system for remote hosting. In this manner, the data storage service may serve as a repository for data aggregated from many geographically dispersed computing devices. For example, internet website logs may be streamed by hundreds of computers to the storage system and be stored as individual objects in one or more "buckets" at the repository. A given bucket may have an access control list that determines which computing devices or user accounts are authorized to upload objects to the bucket or to access objects in a bucket. Similarly, individual objects may have associated access control lists that control which devices or user accounts are able to access or manipulate the object.
[0085] A user may explicitly request that the data in objects in a bucket be transferred to a table, or may establish a service that monitors the bucket and transfers the data in newly placed objects into the table. In some implementations, the transfer of data in the objects to the table may include converting the data format of the objects to a different format, generating columnar stripes for the data in the records, and placing the columnar stripes in a different repository. Metadata for the table may be updated to reference the columnar stripes that include the converted data for the imported objects.
[0086] Thus, in some implementations, when the querying service receives a request to query a table, the metadata for the table is located and a query is performed on the columnar data that underlies the table. The output of the query may be placed in a different table, provided to the remote device requesting the query, or may be stored in the repository of objects as an object (e.g., an object that includes a collection of records).
1. Introduction
[0087] Large-scale parallel computing may be performed using shared clusters of commodity machines. See L. A. Barroso and U. Holzle. The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Morgan & Claypool Publishers, 2009. A cluster may host a multitude of distributed applications that share resources, have widely varying workloads, and run on machines with different hardware parameters. An individual computing machine in a distributed application may take much longer to execute a given task than others, or may never complete due to failures or preemption by a cluster management system. Hence, dealing with stragglers (e.g., computing tasks with significant latency) and failures is important for achieving fast execution and fault tolerance. See G. Czajkowski. Sorting 1 PB with MapReduce. Official Google Blog, November 2008.
[0088] The data used in web and scientific computing is often nonrelational. Hence, a flexible data model may be beneficial in these domains. Data structures used in programming languages, messages exchanged by distributed systems, web traffic logs, etc. may lend themselves to a nested representation. For example, a nested representation of data may include multiple fields that each include several levels of children fields. Some of the children fields may include corresponding data. Normalizing and recombining such data at web scale may be computationally expensive. A nested data model underlies some of the structured data processing at major web companies.
[0089] This document describes a system that supports interactive analysis of very large datasets over shared clusters of commodity machines. Unlike traditional databases, it is capable of operating on in situ nested data. In situ refers to the ability to access data `in place`, for example, in a distributed file system like Google File System (see S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google File System. In SOSP, 2003) or another storage layer like Bigtable (see F. Chang, J. Dean, S. Ghemawat, W. C. Hsieh, D. A. Wallach, M. Burrows, T. Chandra, A. Fikes, and R. Gruber. Bigtable: A Distributed Storage System for Structured Data. In OSDI, 2006).
[0090] The system can execute many queries over such data that may ordinarily require a sequence of MapReduce jobs, but at a fraction of the execution time. See J. Dean and S. Ghemawat. MapReduce: Simplified Data Processing on Large Clusters. In OSDI, 2004. The described system may be used in conjunction with MapReduce to analyze outputs of MapReduce pipelines or rapidly prototype larger computations. Examples of using the system include: [0091] Analysis of web logs and crawled web documents; [0092] Install data for applications served by an online marketplace; [0093] Crash data for application products; [0094] Multimedia playback statistics; [0095] OCR results from scans of books; [0096] Spam analysis; [0097] Debugging of map tiles; [0098] Tablet migrations in managed Bigtable instances; [0099] Results of tests run on a distributed build system; [0100] Disk I/O statistics for hundreds of thousands of disks; [0101] Execution logs of MapReduce jobs across several data centers; and [0102] Symbols and dependencies in a codebase.
[0103] The described system builds on ideas from web search and parallel database management systems. First, its architecture builds on the concept of a serving tree used in distributed search engines. See J. Dean. Challenges in Building Large-Scale Information Retrieval Systems: Invited Talk. In WSDM, 2009. Like a web search request, a query gets pushed down the tree and rewritten at each step. The result of the query is assembled by aggregating the replies received from lower levels of the tree.
[0104] Second, the described system provides a high-level, SQL-like language to express ad hoc queries. In contrast to layers such as Pig (see C. Olston, B. Reed, U. Srivastava, R. Kumar, and A. Tomkins. Pig Latin: a Not-so-Foreign Language for Data Processing. In SIGMOD, 2008.) and Hive (see Hive, 2009), the querying system executes queries natively without translating them into MapReduce jobs.
[0105] Lastly, the described system uses a column-striped storage representation, which enables it to read less data from secondary storage and reduce CPU cost due to cheaper compression. Column stores for analyzing relational data, (D. J. Abadi, P. A. Boncz, and S. Harizopoulos. Column-Oriented Database Systems. VLDB, 2(2), 2009), are not believed to have extended to nested data models. The columnar storage format that is described may be supported by MapReduce, Sawzall (see R. Pike, S. Dorward, R. Griesemer, and S. Quinlan. Interpreting the Data: Parallel Analysis with Sawzall. Scientific Programming, 13(4), 2005), and FlumeJava (see C. Chambers, A. Raniwala, F. Perry, S. Adams, R. Henry, R. Bradshaw, and N. Weizenbaum. FlumeJava: Easy, Efficient Data-Parallel Pipelines. In PLDI, 2010).
[0106] In Section 4, this document describes a columnar storage format for nested data. Algorithms are presented for dissecting nested records into columns and reassembling them.
[0107] In Section 5, a query language for processing data that is stored in the columnar storage format is described. The query language and its execution are designed to operate efficiently on column-striped nested data and do not require restructuring of nested records.
[0108] In Section 6, an illustration of applying execution trees that are used in web search serving systems to database processing is provided. The benefits for answering aggregation queries efficiently are explained.
[0109] In Section 7, experiments conducted on system instances are presented.
Section 2: Example Scenario
[0110] Suppose that Alice, an engineer at a web-search company, comes up with an idea for extracting new kinds of signals from web pages. She runs a MapReduce job that cranks through the input data that includes content from the web pages and produces a dataset containing the new signals, stored in billions of records in a distributed file system. To analyze the results of her experiment, she launches the system described in this document and executes several interactive commands:
[0111] Define Table t AS /path/to/data/*
[0112] Select Top(signal1, 100), Count( ) from t
[0113] Alice's commands execute in seconds. She runs a few other queries to convince herself that her algorithm works. She finds an irregularity in signal1 and digs deeper by writing a FlumeJava program that performs a more complex analytical computation over her output dataset. Once the issue is fixed, she sets up a pipeline which processes the incoming input data continuously. She formulates a few canned SQL queries that aggregate the results of her pipeline across various dimensions, and adds them to an interactive dashboard (e.g., a web page about a service that explains the service and details statistics on the service). Finally, she registers her new dataset in a catalog so other engineers can locate and query the dataset quickly.
[0114] The above scenario may require interoperation between the query processor and other data management tools. The first ingredient for such interoperation is a common storage layer. The Google File System is one such distributed storage layer that may be used. The Google File System manages very large replicated datasets across thousands of machines and tens of thousands of disks.
[0115] Replication helps preserve the data despite faulty hardware and achieve fast response times in presence of stragglers. A high-performance shared storage layer is a key enabling factor for in situ data management. It allows accessing the data without a time-consuming loading phase, which is a major impedance to database usage in analytical data processing (where it is often possible to run dozens of MapReduce analyses before a database management system is able to load the data and execute a single query). For example, when a database management system is used to analyze data, the database may need to be loaded with data using `Insert` commands. Such loading may not be required by the described system. As an added benefit, data in a file system can be conveniently manipulated using standard tools, e.g., to transfer to another cluster, change access privileges, or identify a subset of data for analysis based on file names.
[0116] A second ingredient for building interoperable data management components is a shared storage format. Columnar storage is used for flat relational data but adapting columnar storage to a nested data model allows the technique to be applied to web data. FIG. 1 illustrates the idea that all values of a nested field in a data structure are stored contiguously. For example, in the column-oriented representation of nested data, all values for a particular nested field within a data structure (e.g., the field A.B.C) are stored adjacent to each other and contiguously in memory. Hence, values for the field A.B.C can be retrieved from memory without reading values from the field A.E and values from the field A.B.D.
[0117] Additionally, values for the same particular field in different instances of a data structure (e.g., a `record`) may be stored contiguously. For example, the values for field A.B.C for the record `r1` are stored adjacent to the values for the same field for the record `r2.` To the contrary, in the `record-oriented` representation of nested data, values for all fields within a particular record are stored contiguously. In other words, the data values for a particular field are not bunched together.
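The contiguous column-oriented layout can be sketched as follows. The field names A.B.C, A.B.D, and A.E mirror the example above, while the dictionary representation and the particular values are assumptions of this illustration:

```python
# Two records r1 and r2 containing the nested fields A.B.C, A.B.D, A.E.
records = [
    {"A": {"B": {"C": ["c1", "c2"], "D": ["d1"]}, "E": "e1"}},  # r1
    {"A": {"B": {"C": ["c3"]}, "E": "e2"}},                      # r2
]

def stripe(records, path):
    """Collect every value of `path` across all records, in record order.
    This produces the contiguous, column-oriented sequence described
    above: values for A.B.C are read without touching A.B.D or A.E."""
    out = []
    for record in records:
        node = record
        for name in path.split("."):
            node = node.get(name) if isinstance(node, dict) else None
            if node is None:
                break
        if isinstance(node, list):
            out.extend(node)
        elif node is not None:
            out.append(node)
    return out
```

Note how the values for A.B.C from r1 are immediately followed by the value from r2, as in the column-oriented representation of FIG. 1.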
[0118] The challenge that the described columnar storage format addresses is how to preserve all structural information and be able to reconstruct records from an arbitrary subset of fields. This document next discusses the data model from which the fields in the columnar storage format may be filled, and then turns to algorithms for processing the columnar storage and query processing on data in the columnar storage.
Section 3: Data Model
[0119] This section describes the data model used by the described system and introduces some terminology used later. The described Protocol Buffers data model originated in the context of distributed systems, and is available as an open source implementation. See (Protocol Buffers: Developer Guide. Available at). The data model is based on strongly-typed nested records. Its abstract syntax is given by:
τ = dom | <A1 : τ[*|?], . . . , An : τ[*|?]>
where τ is an atomic type or a record type. Atomic types in dom comprise integers, floating-point numbers, strings, etc. Records consist of one or multiple fields. Field i in a record has a name Ai and an optional multiplicity label. Repeated fields (*) may occur multiple times in a record. They are interpreted as lists of values, i.e., the order of field occurrences in a record is significant. Optional fields (?) may be missing from the record. Otherwise, a field is required (e.g., must appear exactly once).
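A minimal illustration of these multiplicity rules is sketched below using Python dictionaries for records. The `!` marker for required fields is this sketch's own notation, since the abstract syntax marks only repeated (`*`) and optional (`?`) fields:

```python
# field name -> multiplicity: '*' repeated, '?' optional, '!' required
SCHEMA = {"DocId": "!", "Name": "*", "Url": "?"}

def conforms(record, schema):
    """Check a flat record against the multiplicity labels: a required
    field must be present, and a repeated field, when present, must be
    a list (order of occurrences is significant)."""
    for field, mult in schema.items():
        value = record.get(field)
        if mult == "!" and value is None:
            return False
        if mult == "*" and value is not None and not isinstance(value, list):
            return False
    return True
```

A record missing a required field fails the check, while a missing optional or repeated field is allowed.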
[0120] As an illustration, FIG. 2 depicts a schema that defines a record type `Document,` which represents a web document. The schema definition uses the Protocol Buffers syntax. FIG. 2 also shows two sample records, r1 and r2, that conform to the schema. The record structure is outlined using indentation. The sample records r1 and r2 in FIG. 2 are used to explain the algorithms throughout this document. The fields defined in the schema form a tree hierarchy. The full path of a nested field is denoted using a dotted notation, e.g., Name.Language.Code is the full path name for the `Code` field depicted in FIG. 2.
[0121] The nested data model backs a platform-neutral, extensible mechanism for serializing structured data. Code generation tools produce bindings for different programming languages such as C++ or Java. Cross-language interoperability is achieved using a standard binary on-the-wire representation of records, in which field values are laid out sequentially as they occur in the record. This way, a MapReduce program written in Java can consume records from a data source exposed via a C++ library. Thus, if records are stored in a columnar representation, assembling them fast may assist interoperation with MapReduce and other data processing tools.
Section 4: Nested Columnar Storage
[0122] As illustrated in FIG. 1, a goal is to store all values of a given field consecutively to improve retrieval efficiency. In this section, the challenges of lossless representation of record structure in a columnar format (Section 4.1), fast encoding (Section 4.2), and efficient record assembly (Section 4.3) are addressed.
Section 4.1: Repetition and Definition Levels
[0123] A consecutive list of values alone does not convey the structure of a record. Given two values of a field that is repeated in a record, a system may not be able to determine at what `level` the value is repeated (e.g., whether the two values are from different records or are from the same record). Likewise, if an optional field is missing from a record, values alone may not convey which enclosing records were defined explicitly and which were not. The concepts of repetition and definition levels are thus introduced. FIG. 3 includes tables that summarize the repetition and definition levels for atomic fields in the sample records that are depicted in FIG. 1.
[0124] Repetition Levels. Consider the field `Code` in FIG. 2. It occurs three times in record `r1.` Occurrences `en-us` and `en` are inside the first `Name` field, while `en-gb` is in the third `Name` field. To disambiguate these occurrences in the columnar structure, a repetition level is attached to each value that is to be stored in the columnar structure. The repetition level indicates at what repeated field in the field's path the value has repeated. For example, the field path Name.Language.Code contains two fields that are repeated, `Name` and `Language.` Hence, the repetition level of Code ranges between 0 and 2. Level 0 denotes the start of a new record, level 1 denotes a recent repetition at the `Name` field, and level 2 denotes a recent repetition at the `Language` field.
[0125] As an illustration of determining the level for a field, record `r1` may be scanned from the top down. The value `en-us` is first encountered and a check may be performed to identify the field in the Name.Language.Code path that has most recently repeated in the record. In this example, none of the fields have been repeated and thus, the repetition level is 0. The value `en` is next encountered for the Name.Language.Code path and the field `Language` is identified as the field that has most recently repeated. For example, scanning upwards from the value `en,` the first field in the Name.Language.Code path that repeats is `Language.` Thus, the repetition level is 2 (e.g., because `2` corresponds to the `Language` field because `Language` is the second field in the Name.Language.Code path that repeats). Finally, when the value `en-gb` is encountered, the field `Name` has repeated most recently (the `Language` field occurred only once after Name), so the repetition level is 1. In other words, the repetition level for a value may be a number that represents a most recently repeated field. Thus, the repetition levels of Code values in record `r1` are 0, 2, 1.
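The scan described above can be sketched as a recursive walk over a nested record. The dictionary representation of r1 below is an assumption of this sketch (the system stores binary-encoded records), and NULL entries for missing fields are omitted for brevity:

```python
def repetition_levels(record, path, repeated):
    """Return (value, repetition level) pairs for a dotted field path.
    `repeated[i]` is 1 if path[i] is a repeated field. The level of a
    value counts the repeated fields up to the most recently repeated
    one, or 0 at the start of a new record. NULLs for missing fields
    are not emitted in this sketch."""
    out = []

    def walk(node, depth, level):
        child = node.get(path[depth])
        if child is None:
            return
        # the level announced by a repetition at this depth
        level_here = sum(repeated[: depth + 1])
        occurrences = child if isinstance(child, list) else [child]
        for i, occurrence in enumerate(occurrences):
            r = level if i == 0 else level_here
            if depth == len(path) - 1:
                out.append((occurrence, r))
            else:
                walk(occurrence, depth + 1, r)

    walk(record, 0, 0)
    return out

# Record r1 from FIG. 2, written as nested dictionaries.
r1 = {
    "DocId": 10,
    "Links": {"Forward": [20, 40, 60]},
    "Name": [
        {"Language": [{"Code": "en-us", "Country": "us"},
                      {"Code": "en"}],
         "Url": "http://A"},
        {"Url": "http://B"},
        {"Language": [{"Code": "en-gb", "Country": "gb"}]},
    ],
}
```

Running the walk on r1 for Name.Language.Code reproduces the repetition levels 0, 2, 1 discussed above.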
[0126] Notice that the second `Name` field in record `r1` does not contain any values for the field `Code.` To determine that `en-gb` occurs as a value for a field nested within the third instance of the field `Name,` and not in the second instance, a NULL value is added between the values `en` and `en-gb` as they are stored in the columnar structure (see FIG. 3). `Code` is a required child field of the `Language` field, so the fact that a value for the `Code` field is missing implies that the `Language` field is also not defined. In general though, determining the level up to which nested records exist may require additional information.
[0127] Definition Levels. Each value of a field with path `p,` especially every NULL value, has a `definition level` that specifies how many fields in the path `p` that could be undefined (e.g., because the fields are optional or repeated) are actually present in the record. To illustrate, observe that record `r1` has no `Backward` fields for the `Links` field. Still, the field `Links` is defined (at a level of 1). To preserve this information, a NULL value with definition level of 1 is added to the `Links.Backward` column.
[0128] In other words, specifying a level of 1 for the `Links.Backward` path indicates that `1` field that was optional or repeated (i.e., the `Links` field) was defined in a path that includes two fields that are optional or repeated (i.e., the `Links` field and the `Backward` field). Thus, a definition of `1` indicates that the `Backward` field was not instantiated. Similarly, the missing occurrence of `Name.Language.Country` in record `r2` carries a definition level 1, while its missing occurrences in record `r1` have definition levels of 2 (inside `Name.Language`) and 1 (inside `Name`), respectively. The encoding procedure outlined above may preserve the record structure losslessly.
[0129] Encoding. As stored in memory, each column that corresponds to a particular field may be stored with a header that includes a contiguous listing of repetition and definition values, followed by a contiguous listing of the substantive values. Each repetition and definition value may be stored as a bit sequence (e.g., in a single byte). For example, the first four bits of a byte may be used to represent the repetition level for a particular value and the last four bits may be used to represent the definition level. In some examples, the header may include definitions of the lengths of the bit sequences so that delimiters may not be used. Thus, bits may only be used as necessary. For example, if the maximum definition level is 3, two bits per definition level may be used.
[0130] Thus, a representation of columnar data for a single field (e.g., the `Name.Language.Code` field) may be stored in memory with a sequence of bytes representing the repetition and definition levels for a corresponding sequence of values, followed by the sequence of values. NULL values, however, may not be stored explicitly as they may be determined by analyzing the definition levels. For instance, any definition level that is smaller than the number of repeated and optional fields in a field's path can denote a NULL. Thus, a system may be able to determine where in the listing of consecutive values a NULL value should be inserted or inferred. In some examples, definition levels are not stored for values that are always defined. Similarly, repetition levels may only be stored if required. For example, a definition level of 0 implies a repetition level of 0, so the latter may be omitted. In fact, referencing the structures illustrated in FIG. 3, no levels may be stored for the `DocId` field.
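The byte-level packing and the NULL inference described above might look like the following sketch. The 4/4 bit split mirrors the example; a real encoding may size the fields to the maximum levels actually needed:

```python
def pack_levels(rep, definition, def_bits=4):
    """Pack a repetition and a definition level into one byte:
    repetition level in the high bits, definition level in the low."""
    return (rep << def_bits) | definition

def unpack_levels(byte, def_bits=4):
    """Recover the (repetition, definition) pair from a packed byte."""
    return byte >> def_bits, byte & ((1 << def_bits) - 1)

def is_null(definition, max_definition):
    """NULL inference: a definition level below the number of repeated
    and optional fields in the path denotes a NULL, so NULL values need
    no explicit storage."""
    return definition < max_definition
```

For example, the NULL stored for `Links.Backward` in record r1 carries definition level 1 while the path has two repeated/optional fields, so `is_null(1, 2)` holds.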
[0131] A representation of columnar data in memory may be broken up into a set of blocks. Each block may include a header that includes the repetition and definition level information, and a subsequent listing of the values for the field. Each header may include a `constraint` value that indicates an allowable range of values in the block. Thus, the described system may identify which blocks include data that the system is interested in. The constraint can also indicate other properties of the values, e.g., whether the values have been sorted. In general, the `constraint` may be thought of as an `assertion` about what kind of values are found in the block. Each block may be compressed.
Section 4.2: Splitting Records into Columns
[0132] The above description presented an encoding of the record structure in a columnar format. A challenge is how to produce column stripes with repetition and definition levels efficiently. The base algorithm for computing repetition and definition levels is provided below. The algorithm recurses into the record structure and computes the levels for each field value. As illustrated earlier, repetition and definition levels may need to be computed even if field values are missing. Many datasets are sparse and it may not be uncommon to have a schema with thousands of fields, only a hundred of which are used in a given record. Hence, it may be beneficial to process missing fields as cheaply as possible. To produce column stripes, a tree of field writers is created, whose structure matches the field hierarchy in the schema. The basic idea is to update field writers only when they have their own data, and not try to propagate parent state down the tree unless absolutely necessary. To do that, child writers inherit the levels from their parents. A child writer synchronizes to its parent's levels whenever a new value is added.
[0133] An example algorithm for decomposing a record into columns is shown in FIG. 4. Procedure `DissectRecord` is passed an instance of a `RecordDecoder,` which is used to traverse binary-encoded records. `FieldWriters` form a tree hierarchy isomorphic to that of the input schema. The root `FieldWriter` is passed to the algorithm for each new record, with `repetitionLevel` set to 0. The primary job of the `DissectRecord` procedure is to maintain the current `repetitionLevel.` The current `definitionLevel` is uniquely determined by the tree position of the current writer, as the sum of the number of optional and repeated fields in the field's path.
[0134] The while-loop of the algorithm (Line 5) iterates over all atomic and record-valued fields contained in a given record. The set `seenFields` tracks whether or not a field has been seen in the record. It is used to determine what field has repeated most recently. The child repetition level `chRepetitionLevel` is set to that of the most recently repeated field or else defaults to its parent's level (Lines 9-13). The procedure is invoked recursively on nested records (Line 18).
[0135] The document above referenced `FieldWriters` accumulating levels and propagating them lazily to lower-level writers. This may be performed by each non-leaf writer keeping a sequence of (repetition, definition) levels. Each writer also has a `version` number associated with it. Simply stated, a writer version is incremented by one whenever a level is added. It is sufficient for children to remember the last parent's version they synced. If a child writer ever gets its own (non-null) value, it synchronizes its state with the parent by fetching new levels, and only then adds the new data.
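A schematic version of this lazy synchronization is sketched below. The class shape, method names, and the exact level bookkeeping are assumptions of this illustration, not the system's actual implementation:

```python
class FieldWriter:
    """Writer that inherits pending levels from its parent lazily:
    levels accumulate at the parent and are fetched by a child only
    when the child receives a (non-null) value of its own."""

    def __init__(self, parent=None):
        self.parent = parent
        self.levels = []       # (repetition, definition[, value]) entries
        self.synced = 0        # how many parent entries have been fetched

    def add_level(self, repetition, definition):
        self.levels.append((repetition, definition))

    def add_value(self, value, repetition, definition):
        self._sync()           # pull any levels added upstream first
        self.levels.append((repetition, definition, value))

    def _sync(self):
        if self.parent is None:
            return
        self.parent._sync()    # parent fetches from grandparent first
        self.levels.extend(self.parent.levels[self.synced:])
        self.synced = len(self.parent.levels)

root = FieldWriter()
name = FieldWriter(root)
code = FieldWriter(name)
root.add_level(0, 0)           # e.g., a record in which Name is absent
code.add_value("en-us", 0, 2)  # first Code value of the next record
```

The pending root level reaches the leaf writer only when the leaf receives its own value, so writers for missing fields are never touched.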
[0136] Because input data may have thousands of fields and millions of records, it may not be feasible to store all levels in memory. Some levels may be temporarily stored in a file on disk. For a lossless encoding of empty (sub)records, non-atomic fields (such as Name.Language in FIG. 2) may need to have column stripes of their own, containing only levels but no non-NULL values.
Section 4.3: Record Assembly
[0137] Assembling records (e.g., records `r1` and `r2`) from columnar data efficiently is critical for record-oriented data processing tools (e.g., MapReduce). Given a subset of fields, a goal is to reconstruct the original records as if they contained just the selected fields, with all other fields stripped away. The key idea is to use a finite state machine (FSM) whose states correspond to readers for the selected fields: the next repetition level returned by the current reader is examined to decide which reader to use next. The FSM is traversed from the start to end state once for each record.
[0138] FIG. 5 shows an FSM that reconstructs the complete records in the running example using as input the blocks described in Section 4.1. In this example, the nodes are labeled with fields and the edges are labeled with repetition levels. The start state is `DocId.` Once a `DocId` value is read, the FSM transitions to the `Links.Backward` state. After all repeated `Backward` values have been drained, the FSM jumps to `Links.Forward,` etc.
[0139] To sketch how FSM transitions are constructed, let `l` be the next repetition level returned by the current field reader for field `f.` Starting at `f` in the schema tree (e.g., the schema in FIG. 2), its ancestor that repeats at level `l` is found and the first leaf field `n` inside that ancestor is selected. This provides an FSM transition (`f`; `l`)→n. For example, let `l`=1 be the next repetition level read by `f`=`Name.Language.Country.` Its ancestor with repetition level `1` is Name, whose first leaf field is `n`=`Name.Url.`
[0140] If only a subset of fields need to be retrieved, a simpler FSM that is cheaper to execute may be constructed. FIG. 6 depicts an FSM for reading the fields `DocId` and `Name.Language.Country.` The figure shows the output records `s1` and `s2` produced by the automaton. Notice that the encoding and the assembly algorithm preserve the enclosing structure of the field `Country.` This may be important for applications that need to access, e.g., the Country appearing in the first Language of the second Name. In XPath, this may correspond to the ability to evaluate expressions like /Name[2]/Language[1]/Country.
[0141] Construct FSM Procedure. FIG. 7 shows an algorithm for constructing a finite-state machine that performs record assembly. The algorithm takes as input the fields that should be populated in the records, in the order in which they appear in the schema. The algorithm uses a concept of a `common repetition level` of two fields, which is the repetition level of their lowest common ancestor. For example, the common repetition level of `Links.Backward` and `Links.Forward` equals 1. The second concept is that of a `barrier`, which is the next field in the sequence after the current one. The intuition is that each field is attempted to be processed one by one until the barrier is hit and requires a jump to a previously seen field.
[0142] The algorithm consists of three steps. In Step 1 (Lines 6-10), the common repetition levels are processed backwards. These are guaranteed to be non-increasing. For each repetition level encountered, the left-most field in the sequence is picked--that is the field that is to be transitioned to when that repetition level is returned by a `FieldReader.` In Step 2, the gaps are filled (Lines 11-14). The gaps arise because not all repetition levels are present in the common repetition levels computed at Line 8. In Step 3 (Lines 15-17), transitions for all levels are set that are equal to or below the barrier level to jump to the barrier field. If a `FieldReader` produces such a level, the nested record may continue to be constructed and there may be no need to bounce off the barrier.
[0143] Assemble Record Procedure. An Assemble Record procedure (illustrated in FIG. 8) takes as input a set of `FieldReaders` and (implicitly) the FSM with state transitions between the readers. In other words, the algorithm operates on an FSM and columnar data and outputs constructed records. Variable `reader` holds the current `FieldReader` in the main routine (Line 4). Variable `lastReader` holds the last reader whose value is appended to the record and is available to all three procedures shown in FIG. 8. The main while-loop is at Line 5. The next value is fetched from the current reader. If the value is not NULL, which is determined by looking at its definition level, the record being assembled is synchronized to the record structure of the current reader in the method `MoveToLevel,` and the field value is appended to the record. Otherwise, the record structure may be adjusted without appending any value--which may be done if empty records are present. On Line 12, a `full definition level` is used. Recall that the definition level factors out required fields (only repeated and optional fields are counted). The full definition level takes all fields into account.
[0144] Procedure `MoveToLevel` transitions the record from the state of the `lastReader` to that of the `nextReader` (see Line 22). For example, suppose the `lastReader` corresponds to `Links.Backward` in FIG. 2 and `nextReader` is `Name.Language.Code.` The method ends the nested record Links and starts new records Name and Language, in that order. Procedure `ReturnsToLevel` (Line 30) is a counterpart of `MoveToLevel` that only ends current records without starting any new ones.
[0145] In their on-the-wire representation, records are laid out as pairs of a field identifier followed by a field value. Nested records can be thought of as having an `opening tag` and a `closing tag`, similar to XML (actual binary encoding may differ). A description of `starting` a record refers to writing opening tags, while `ending` a record refers to writing closing tags.
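The record synchronization of `MoveToLevel` can be sketched with the opening-tag/closing-tag view introduced above. Paths are given as lists of field names, and the function shape is an assumption of this illustration:

```python
def move_to_level(last_path, next_path, emit):
    """End nested records up to the lowest common ancestor of the two
    field paths, then start the records enclosing the next field. Only
    the enclosing record names are opened or closed (all components of
    a path except the last, which is the field itself)."""
    common = 0
    while (common < len(last_path) - 1 and common < len(next_path) - 1
           and last_path[common] == next_path[common]):
        common += 1
    for name in reversed(last_path[common:-1]):
        emit("</%s>" % name)   # close records being left behind
    for name in next_path[common:-1]:
        emit("<%s>" % name)    # open records enclosing the next field

tags = []
move_to_level(["Links", "Backward"], ["Name", "Language", "Code"], tags.append)
```

For the example in paragraph [0144] (moving from `Links.Backward` to `Name.Language.Code`), this ends the record Links and starts the records Name and Language, in that order.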
Section 5: Query Language
[0146] The described system may employ a query language that is based on SQL and is designed to be efficiently implementable on columnar nested storage. Aspects of the query language are described herein. Each SQL-like statement (and algebraic operators it translates to) takes as input one or multiple nested tables (e.g., a set of compressed blocks of columnar data that represents a table, as described in Section 4.1) and their schemas, and produces a nested table (e.g., a modified instance of the columnar data) and its output schema. FIG. 9 depicts a sample query that performs projection, selection, and within-record aggregation. The query is evaluated over the table t={r1, r2} from FIG. 2. The fields are referenced using path expressions. The query produces a nested result although no record constructors are present in the query.
[0147].
[0148] The COUNT expression illustrates within-record aggregation. The aggregation is done WITHIN each `Name` subrecord, and emits the number of occurrences of `Name.Language.Code` for each `Name` as a non-negative 64-bit integer (uint64). Thus, the WITHIN statement enables intra-row aggregation. In other words, records of the same name may be aggregated in a same record or beneath a same child. In contrast, standard SQL, which operates on flat relational data, may be unable to perform such intra-row aggregation.
[0149] The language supports nested subqueries, inter and intra-record aggregation, top-k, joins, user-defined functions, etc. Some of these features are discussed in the experimental data section. As one additional example, the described query language includes an OMIT IF statement that can filter an intra-row group of values. For example, each of thousands of records may include several repeated `Cost` fields that each include a numerical value. A user of the query language may want to throw out all records where a sum of the values in the fields exceeds the number `20.` Thus, the user may employ an OMIT IF statement to generate a list of the records where the summed `Cost` in each record is twenty or less.
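The OMIT IF behavior from this example can be sketched in Python. The record shape and the `Cost` field follow the hypothetical above; the function is an illustration of the semantics, not the system's implementation:

```python
def omit_if(records, predicate):
    """Keep only the records for which the OMIT IF condition is false."""
    return [record for record in records if not predicate(record)]

orders = [
    {"Order": 1, "Cost": [5, 10]},    # summed cost 15: kept
    {"Order": 2, "Cost": [15, 10]},   # summed cost 25: omitted
]
cheap = omit_if(orders, lambda record: sum(record["Cost"]) > 20)
```

Only the record whose summed `Cost` is twenty or less survives the filter.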
Section 6: Query Execution
[0150] Tree Architecture. The described system uses a multi-level serving tree to execute queries (see FIG. 10). A root server receives incoming queries, reads metadata from the tables, and routes the queries to the next level in the serving tree. The leaf servers communicate with the storage layer or access the data on local disk. Many of the queries that operate in the described system are single-scan aggregations; therefore, this document focuses on explaining those and uses them for experiments in the next section. Consider a simple aggregation query below:
[0151] Select A, Count(B) from T Group by A
[0152] When the root server receives the above query, it determines all tablets, i.e., horizontal partitions of the table, that comprise the table `T` and rewrites the query as follows:
[0153] Select A, Sum(c) from (R11 Union All . . . Rn1) Group by A
[0154] Tables R11, . . . , Rn1 are the results of queries sent to the nodes 1, . . . , n at level 1 of the serving tree:
[0155] Ri1=Select A, Count(B) as c from Ti1 Group by A
[0156] Ti1 is a disjoint partition of tablets in `T` processed by server i at level 1.
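The two-level rewrite above can be sketched as follows: leaf servers compute partial counts grouped by `A` over disjoint tablet partitions, and the root sums the partial counts. The data and partitioning are illustrative:

```python
# Minimal sketch of the serving-tree query rewrite: leaves compute
# Ri1 = SELECT A, COUNT(B) AS c FROM Ti1 GROUP BY A, and the root computes
# SELECT A, SUM(c) FROM (R11 UNION ALL ... Rn1) GROUP BY A.
from collections import Counter

def leaf_query(tablet):
    """Per-partition COUNT(B) grouped by A (NULL values of B not counted)."""
    counts = Counter()
    for row in tablet:
        if row.get("B") is not None:
            counts[row["A"]] += 1
    return counts

def root_query(tablets):
    """Sum the partial counts produced by each leaf server."""
    total = Counter()
    for tablet in tablets:
        total.update(leaf_query(tablet))
    return dict(total)

tablets = [
    [{"A": "x", "B": 1}, {"A": "y", "B": 2}],
    [{"A": "x", "B": 3}, {"A": "x", "B": None}],
]
result = root_query(tablets)
```

Because COUNT decomposes into a SUM of partial counts, each level of the tree only aggregates the compact results of the level below it.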
[0157] Query Dispatcher. The described system is a multi-user system, e.g., several queries may be executed simultaneously. A query dispatcher schedules queries based on their priorities and balances the load. Another role is to provide fault tolerance when one server becomes much slower than others or a tablet replica becomes unreachable.
[0158] The amount of data processed in each query is often larger than the number of processing units available for execution, which are called slots. A slot corresponds to an execution thread on a leaf server. For example, a system of 3,000 leaf servers each using eight threads has 24,000 slots, so a table spanning 100,000 tablets can be processed by assigning about five tablets to each slot. During query execution, the query dispatcher computes a histogram of tablet processing times. If a tablet takes a disproportionately long time to process, the system reschedules the tablet on another server. Some tablets may need to be redispatched multiple times.
[0159] The leaf servers read stripes of nested data in columnar representation. The blocks in each stripe are prefetched asynchronously; the read-ahead cache typically achieves hit rates of 95%. Tablets are usually three-way replicated. When a leaf server cannot access one tablet replica, it falls over to another replica.
[0160] The query dispatcher honors a parameter that specifies the minimum percentage of tablets that must be scanned before returning a result. As described below, setting such parameter to a lower value (e.g., 98% instead of 100%) can often speed up execution significantly, especially when using smaller replication factors.
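The minimum-percentage parameter can be sketched as an early return over a stream of tablet completions. The completion stream and function names are illustrative, not the dispatcher's actual interface:

```python
# Sketch of the dispatcher parameter described above: stop waiting for
# straggler tablets once a minimum fraction of tablets has been scanned.
# The scheduling model is simplified to an iterator of (tablet_id, result)
# completion events.

def collect_results(completions, total_tablets, min_fraction=1.0):
    """Return partial results once min_fraction of tablets have completed."""
    results = {}
    for tablet_id, result in completions:
        results[tablet_id] = result
        if len(results) / total_tablets >= min_fraction:
            break
    return results

# With min_fraction=0.98 and 100 tablets, the query returns after 98
# completions instead of waiting on the slowest two.
completions = ((i, i * i) for i in range(100))
partial = collect_results(completions, total_tablets=100, min_fraction=0.98)
```

Trading a small fraction of the data for latency in this way is what makes the 98% setting much faster than 100% when stragglers are present.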
[0161] Each server may have an internal execution tree, as depicted on the right-hand side of FIG. 7. The internal tree corresponds to a physical query execution plan, including evaluation of scalar expressions. Optimized, type-specific code is generated for most scalar functions. A basic execution plan consists of a set of iterators that scan input columns in lockstep and emit results of aggregates and scalar functions annotated with the correct repetition and definition levels, bypassing record assembly entirely during query execution.
[0162] Some queries by the described system, such as top-k and count-distinct, return approximate results using well-known single-scan algorithms. See Hailing Yu, Hua-gang Li, Ping Wu, Divyakant Agrawal, Amr El Abbadi, "Efficient processing of distributed top-k queries", DEXA 2005, pp. 65-74.
Section 7: Experimental Data
[0163] This section presents an experimental evaluation of the described system on several datasets, and examines the effectiveness of columnar storage for nested data. The properties of the datasets used in the study are summarized in FIG. 11. In uncompressed, non-replicated form the datasets occupy about a petabyte of space. All tables are three-way replicated, except one two-way replicated table, and contain from 100K to 800K tablets of varying sizes. This section begins by examining the basic data access characteristics on a single machine, then show how columnar storage benefits MapReduce execution, and finally focus on the described system's performance. The experiments were conducted on system instances running in two data centers next to many other applications, during regular business operation. Table and field names used below are anonymized.
[0164] Local Disk. In the first experiment, performance tradeoffs of columnar vs. record-oriented storage were examined by scanning a 1 GB fragment of table T1 containing about 300K rows (see FIG. 12). The data is stored on a local disk and takes about 375 MB in compressed columnar representation. The record-oriented format uses heavier compression yet yields about the same size on disk. The experiment was done on a dual-core Intel machine with a disk providing 70 MB/s read bandwidth. All reported times are cold; OS cache was flushed prior to each scan.
[0165] FIG. 12 shows five graphs illustrating the time it takes to read and uncompress the data, and to assemble and parse the records, for a subset of the fields. Graphs (a)-(c) outline the results for columnar storage. Each data point in these graphs was obtained by averaging the measurements over 30 runs, in each of which a random set of columns of a given cardinality was chosen. Graph (a) shows reading and decompression time. Graph (b) adds record assembly from the columns. Graph (c) shows how long it takes to parse the records into strongly typed C++ data structures.
[0166] Graphs (d)-(e) depict the time for accessing the data on record-oriented storage. Graph (d) shows reading and decompression time. A bulk of the time is spent in decompression; in fact, the compressed data can be read from the disk in about half the time. As Graph (e) indicates, parsing adds another 50% on top of reading and decompression time. These costs are paid for all fields, including the ones that are not needed.
[0167] When few columns are read, the gains of columnar representation may be about an order of magnitude. Retrieval time for columnar nested data may grow linearly with the number of fields. Record assembly and parsing may be expensive, each potentially doubling the execution time. Similar trends were observed on other datasets. A natural question to ask is where the top and bottom graphs cross, i.e., where record-wise storage starts outperforming columnar storage. In practice, the crossover point may lie at dozens of fields but varies across datasets and depends on whether or not record assembly is required.
[0168] MapReduce and the Described System. Next, the execution of MapReduce and the described system is illustrated on columnar vs. record-oriented data. In this case, a single field is accessed and the performance gains are the most pronounced. Execution times for multiple columns can be extrapolated using the results of FIG. 12. In this experiment, the average number of terms in a field `txtField` of table `T1` is counted. MapReduce execution is done using the following Sawzall program:
[0169] numRecs: table sum of int;
[0170] numWords: table sum of int;
[0171] emit numRecs<-1;
[0172] emit numWords<-CountWords(input.txtField);
[0173] The query below is an equivalent query in the described system's query language:
[0174] Q1: Select Sum(CountWords(txtField))/COUNT(*) FROM T1
[0175] FIG. 13 shows the execution times of two MapReduce jobs and the described system on a logarithmic scale. Both MapReduce jobs are run on 3000 workers (e.g., servers). Similarly, a 3000-node instance of the present system is used to execute Query Q1. The described system and MapReduce-on-columns read about 0.5 TB of compressed columnar data vs. 87 TB read by MapReduce-on-records. As FIG. 12 illustrates, MapReduce gains an order of magnitude in efficiency by switching from record-oriented to columnar storage (from hours to minutes). Another order of magnitude is achieved by using the described system (going from minutes to seconds).
[0176] Serving tree topology. In the next experiment, the impact of the serving tree depth on query execution times is illustrated. Two GROUP BY queries are performed on Table T2, each executed using a single scan over the data. Table T2 contains 24 billion nested records. Each record has a repeated field item containing a numeric amount. The field item.amount repeats about 40 billion times in the dataset. The first query sums up the item amount by country:
[0177] Q2: Select country, Sum(item.amount) FROM T2
[0178] Group by Country
[0179] It returns a few hundred records and reads roughly 60 GB of compressed data from disk. The next query performs a GROUP BY on a text field domain with a selection condition. It reads about 180 GB and produces around 1.1 million distinct domains:
[0180] Q3: Select Domain, Sum(item.amount) FROM T2
[0181] Where domain CONTAINS `.net`
[0182] Group by Domain
[0183] FIG. 14 shows the execution times for each query as a function of the server topology. In each topology, the number of leaf servers is kept constant at 2900 so that the same cumulative scan speed may be assumed. In the 2-level topology (1:2900), a single root server communicates directly with the leaf servers. For 3 levels, a 1:100:2900 setup is used, i.e., an extra level of 100 intermediate servers. The 4-level topology is 1:10:100:2900.
[0184] Query Q2 runs in 3 seconds when 3 levels are used in the serving tree and does not benefit much from an extra level. In contrast, the execution time of Q3 is halved due to increased parallelism. At 2 levels, Q3 is off the chart, as the root server needed to aggregate near-sequentially the results received from thousands of nodes. This experiment illustrates how aggregations returning many groups may benefit from multi-level serving trees.
[0185] Per-tablet Histograms. FIG. 15 shows how fast tablets get processed by the leaf servers for a specific run of Q2 and Q3. The time is measured starting at the point when a tablet got scheduled for execution in an available slot, i.e., excludes the time spent waiting in the job queue. This measurement methodology factors out the effects of other queries that are executing simultaneously. The area under each histogram corresponds to 100%. As FIG. 15 indicates, 99% of Q2 (or Q3) tablets are processed in under one second (or two seconds, respectively).
[0186] Within-record Aggregation. As another experiment, the performance of Query Q4 is examined when run on Table T3. The query illustrates within-record aggregation: it counts all records where the sum of `a.b.c.d` values occurring in the record is larger than the sum of `a.b.p.q.r` values. The fields repeat at different levels of nesting. Due to column striping, only 13 GB (out of 70 TB) are read from disk and the query completes in 15 seconds. Without support for nesting, running this query on T3 would be expensive.
[0187] Q4: Select Count(c1>c2) from [0188] (SELECT SUM(a.b.c.d) WITHIN RECORD AS c1, SUM(a.b.p.q.r) WITHIN RECORD AS c2 FROM T3)
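Query Q4's within-record sums can be sketched as follows; the flattened record layout (one list of leaf values per path) is an illustrative simplification:

```python
# Sketch of Q4: for each record, sum the repeated a.b.c.d and a.b.p.q.r
# values WITHIN the record, then count records where the first sum exceeds
# the second. Records are modeled as dicts mapping a path to its leaf values.

def q4(records):
    count = 0
    for rec in records:
        c1 = sum(rec.get("a.b.c.d", []))     # SUM(a.b.c.d) WITHIN RECORD
        c2 = sum(rec.get("a.b.p.q.r", []))   # SUM(a.b.p.q.r) WITHIN RECORD
        if c1 > c2:
            count += 1
    return count

records = [
    {"a.b.c.d": [4, 5], "a.b.p.q.r": [3]},   # 9 > 3 -> counted
    {"a.b.c.d": [1], "a.b.p.q.r": [2, 2]},   # 1 > 4 -> not counted
]
q4_result = q4(records)
```

The key point is that both sums are computed per record, so the comparison never mixes values from different rows.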
[0189] Scalability. The following experiment illustrates the scalability of the system on a trillion-record table. Query Q5 shown below selects top-20 aid's and their number of occurrences in Table T4. The query scans 4.2 TB of compressed data.
[0190] Q5: Select Top(Aid, 20), Count(*) from T4
[0191] Where Bid={value1} and cid={value2}
[0192] The query was executed using four configurations of the system, ranging from 1000 to 4000 nodes. The execution times are in FIG. 16. In each run, the total expended CPU time is nearly identical, at about 300K seconds, whereas the user-perceived time decreases near-linearly with the growing size of the system. This result suggests that a larger system can be just as effective in terms of resource usage as a smaller one, yet allows faster execution.
[0193] Stragglers. Stragglers may be tasks (e.g., processing a tablet) that are not performed, for example, because the machine performing the task has an operational problem or the machine is not being aggressive enough in handling the task given higher-priority tasks. Query Q6 below is run on a trillion-row table T5. In contrast to the other datasets, T5 is two-way replicated. Hence, the likelihood of stragglers slowing the execution is higher since there are fewer opportunities to reschedule the work.
[0194] Q6: Select Count(Distinct a) from T5
[0195] Query Q6 reads over 1 TB of compressed data. The compression ratio for the retrieved field is about 10. As indicated in FIG. 17, the processing time for 99% of the tablets is below 5 seconds per tablet per slot. However, a small fraction of the tablets take a lot longer, slowing down the query response time from less than a minute to several minutes, when executed on a 2500 node system. The next section summarizes experimental findings.
Section 8: Observations
[0196] FIG. 18 shows the query response time distribution in a typical monthly workload of the described system, on a logarithmic scale. As FIG. 18 indicates, most queries are processed under 10 seconds, well within the interactive range. Some queries have achieved a scan throughput close to 100 billion records per second in a busy cluster, and even higher on dedicated machines. The experimental data presented above suggests the following observations:
[0197] Scan-based queries can be executed at interactive speeds on disk-resident datasets of numerous records;
[0198] Near-linear scalability in the number of columns and servers may be achievable for systems containing thousands of nodes;
[0199] MapReduce can benefit from columnar storage just like a DBMS;
[0200] Record assembly and parsing are expensive. Software layers (beyond the query processing layer) may be optimized to directly consume column-oriented data;
[0201] MapReduce and query processing can be used in a complementary fashion; one layer's output can feed another's input;
[0202] In a multi-user environment, a larger system can benefit from economies of scale while offering a qualitatively better user experience;
[0203] If trading speed against accuracy is acceptable, a query can be terminated much earlier and yet see most of the data; and
[0204] The bulk of a web-scale dataset can be scanned fast, although getting to the last few percent may increase the amount of processing time.
[0205] FIG. 19 is a block diagram of a system for generating and processing columnar storage representations of nested records. The record generator 1904 generates records of nested data from data sources 1920 and a schema 1902. The column generator 1908 receives as input the records 1906 and the schema 1902 and outputs column stripes that represent the data in the records 1906, but in a columnar format. The columnar data 1910 may be queried in situ by the querying system 1912 in order to produce different sets of output columns 1914. The columnar data 1910 may also be assembled back into record form by the record assembler 1916. The records 1918 that are output by the record assembler may each include a sub-set of fields from the original records in the collection 1906. The output records 1918 may be operated on by a record-based data analysis program (e.g., MapReduce).
[0206] More specifically, the data sources 1920 may include substantially unstructured data. Substantially unstructured indicates that the data may include elements that denote structure, but the entire spectrum of information may not be similarly structured. As an illustration, the data sources 1920 may include the source code for each of millions of websites. Although each website includes some degree of structure, the content of each website is not generated based on a common schema. Standards may generally govern a format of the site, but content and placement of fields is not specified among each and every website by a single schema. In some examples, the information in data sources 1920 is not stored in the common storage layer 1922, but is pulled directly from external sources on the internet.
[0207] The schema 1902 defines a common structuring for information that may be contained in the data sources. As described earlier in this document, the schema 1902 can require certain fields of information and may permit other fields of information to be stored as optional.
[0208] The record generator 1904 receives as input the schema 1902 and information from the data sources 1920. The record generator 1904 takes the information from the data sources 1920 and structures all or portions of the information into individual instances of records that comply with the schema 1902. For example, while the data sources 1920 may include substantially unstructured data from web pages, the record generator 1904 may select pieces of information from each web page to include for particular records 1906.
[0209] Thus, each of the records 1906 may include data that is structured according to the schema 1902. The structured data may include fields, which may denote a semantics of data values and a structural relationship of the data values. Accordingly, the schema may be referenced to obtain additional definition information for the data value (e.g., what the digitally stored data value represents in the real world or on a web page and relationships to other values).
[0210] Each record 1906 may include nested fields and data values. A nested record may include more than one field of the same name or path. The fields with the same name or path, however, can be structurally located in different locations in a particular record. For example, a single field that is defined by the schema may be able to repeat multiple times. Further, fields may have children fields (i.e., nested fields). Thus, at a top level of a record a particular field may repeat, and each repetition of the field may or may not include a particular child field. In other words, the record may include instances of the child field in some portions of the record, but not in other portions of the records.
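A small example makes the structure concrete. The record below is illustrative (field names follow the running `Name.Language.Code` example used earlier in this document): the `Name` field repeats at the top level, and only some repetitions carry the optional child `Language`:

```python
# An illustrative nested record: `Name` repeats, and each repetition may or
# may not include the child field `Language`, which itself repeats.

record = {
    "DocId": 10,
    "Name": [
        {"Url": "http://A", "Language": [{"Code": "en-us"}, {"Code": "en"}]},
        {"Url": "http://B"},                  # no Language child here
        {"Language": [{"Code": "en-gb"}]},    # no Url here
    ],
}

# Occurrences of Name.Language.Code across the whole record:
codes = [
    lang["Code"]
    for name in record["Name"]
    for lang in name.get("Language", [])
]
```

The same path (`Name.Language.Code`) thus appears a different number of times under different repetitions of its parent, which is exactly the ambiguity the repetition and definition levels resolve.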
[0211] The collection of records 1906 may be translated into columnar data 1910 to speed up processing of information in the records. For example, if the amount of records in the collection 1906 numbers in the billions, and each record could include hundreds of different fields, an analysis of the records may be time-intensive where information on a small number of fields is desired. This is because each record in the collection 1906 is stored with other information from the record. That is, each record is grouped together in a consecutive portion of memory (e.g., as illustrated in the `record-oriented` depiction of nested data in FIG. 1).
[0212] In contrast, columnar data 1910 includes columns that each store information for a single field in the schema 1902 (e.g., as illustrated in the `column-oriented` depiction of nested data in FIG. 1). Thus, if the field is a byte long, the column for the field may be on the order of billions of bytes (e.g., one byte for each record) as opposed to billions of records (e.g., where each record may be a megabyte in size). The operations of the column generator 1908 are described in more detail in Section 4.2 "Splitting Records into Columns." The storage format for the columnar data 1910 is described in more detail in Section 4.1 "Repetition and Definition Levels."
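The layout difference can be sketched as a simple transposition; flat records are used here for brevity, and the helper name is illustrative:

```python
# Sketch of the record- vs column-oriented layouts described above: the same
# records stored row-by-row versus as one contiguous value list per field.

def to_columns(records, fields):
    """Transpose a list of records into one value list (column) per field."""
    return {f: [r.get(f) for r in records] for f in fields}

records = [
    {"DocId": 10, "Url": "http://A"},
    {"DocId": 20, "Url": "http://B"},
]
columns = to_columns(records, ["DocId", "Url"])
# A query touching only DocId now reads columns["DocId"] and skips Url.
```

In the columnar form, a scan over one small field reads only that field's contiguous values rather than paging through every full record.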
[0213] The columnar data 1910 may be queried directly using the querying system 1912. In other words, the columnar data 1910 may be queried without loading the data into a database. The querying system, when executing a query, may receive as an input a table of columnar data. In some examples, the querying system also receives as input the schema 1902. The columnar stripes may be stored together with the schema to make the data self-describing. The querying system allows operations to be performed on the columnar data in order to generate columns of output information 1914. The output columns 1914 may include a subset of the values represented in the columnar data 1910, as determined by a particular query. In some examples, the querying system outputs records 1918 instead of, or in addition to, the columns 1914.
[0214] For example, the querying system 1912 may receive a first query and, in response, may parse through select columns of data and generate a set of output columns that provides a title of all web pages that have one or more videos and a number of the videos for each web page. The querying system may receive a second query and in response output a second set of output columns that provides a URL of every web page that was generated within the last fifteen minutes. Other information from the columns 1910 may not be included in a set of output columns that corresponds to a particular query 1914.
[0215] Data that is stored as columnar data 1910 may need to be accessed by an analytical service that does not operate on columnar data but operates on records. Thus, the record assembler 1916 may receive as input the columnar data and assemble records from the columnar data. The process of assembling records is described in more detail in Section 4.3 "Record Assembly."
[0216] Although the records may already be available in the collection 1906, the record assembler 1916 enables generating a set of records that includes a subset of the fields of the records in the collection 1906. For example, the records in the collection may include thousands of different fields. A user may want to run a record-oriented analysis program that only requires knowledge from two of the fields, but for all of the records. Thus, the record assembler 1916 may generate a set of records that only includes information on the requested fields. This way, multiple sets of output records 1918 can be developed for different analysis or for different analysis programs. An analysis on smaller records may be faster than an analysis that must traverse the larger records that may be found in collection 1906.
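The field-subsetting behavior of the record assembler can be sketched as follows. Flat fields only are shown; reassembly of nested structure from repetition and definition levels is covered in Section 4.3:

```python
# Sketch of the record assembler's field subsetting: rebuild records that
# carry only the requested fields, so a record-oriented analysis program
# reads much smaller inputs.

def assemble(columns, fields, num_records):
    """Rebuild records containing only `fields` from per-field columns."""
    return [
        {f: columns[f][i] for f in fields}
        for i in range(num_records)
    ]

columns = {
    "DocId": [10, 20],
    "Url": ["http://A", "http://B"],
    "Extra": ["x", "y"],    # thousands of other fields would sit here
}
small_records = assemble(columns, ["DocId", "Url"], 2)
```

A MapReduce-style job handed `small_records` traverses two fields per record instead of the full original record.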
[0217] The above description of the operation of the system 1900 illustrates an example where the collection of records 1906 includes records that are formatted in accordance with the schema 1902, and where the columnar data 1910 is generated from this single set of similarly-structured data. In various examples, multiple schemas 1902 may be used to generate a collection of records that includes many sets of differently structured records 1906. Each record, however, may identify in a header the type of schema that was used in the record's generation. Similarly, a column stripe may be generated for each field in each of many sets of similarly-structured records. Each column stripe may indicate not only the name of the field, but also the schema from which the columnar data is associated (i.e., the schema used to format the records from which the columnar data was generated).
[0218] FIG. 20 is a flow chart of an example process for generating columnar data. The process may be performed by components of the system 1900.
[0219] In box 2002, a set of records is generated. The generation of the records may be performed by the record generator 1904. Unstructured data (e.g., from data sources 1920) may be compiled into a standardized record format that is defined by schema 1902. The records may be stored in the collection 1906.
[0220] In box 2004, the records in the collection 1906 are accessed. For example, the column generator 1908 receives as input the data from the collection of records 1906.
[0221] In box 2006, a determination is made whether a column stripe is to be generated for an additional field. For example, a stripe is to be generated for each field in the set of records that are stored in the collection 1906 (and thus each record in the schema 1902 or a subset thereof). In this illustration, no stripes have been made so far, and thus there are fields for which a stripe is to be generated. Accordingly, the process proceeds to box 2008 in order to perform operations for a particular field. If all stripes had been generated (e.g., a stripe had been generated for every field in the collection of records 1906), the process may end.
[0222] In box 2008, a list of values for the particular field is generated. For example, each of the records may be traversed and a list of values for the particular field is generated.
[0223] In box 2010, repetition levels for the particular field are generated. For example, the column generator 1908 may determine a repetition level for each of the values in the list by determining a most recently repeated field in the path for the field.
[0224] In box 2012, definition levels for the particular field are generated. For example, the column generator 1908 may determine a definition level for each value (including values that are `missing,` as described in more detail above).
[0225] In box 2014, a columnar stripe is assembled for the particular field. In various examples, the repetition and definition levels are placed in paired groupings in the header of the stripe. The list of values may be placed in the body of the stripe.
[0226] In box 2016, the columnar stripe is broken into blocks that may be compressed. Each block may include a set of values and their corresponding repetition and definition levels. Subsequently, a determination in box 2006 of whether columnar stripes are to be generated for additional fields is performed. If no additional columnar stripes are to be generated, the process ends.
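Boxes 2008-2014 can be sketched together for a restricted case. The sketch below handles only paths of repeated groups ending in a required scalar (as in `Name.Language.Code`), which is enough to show how values, repetition levels, and definition levels are emitted in one pass; the full algorithm is depicted in FIG. 4:

```python
# Simplified stripe generation: walk each record along one nested path and
# emit (value, repetition level, definition level) triples. Definition level
# counts how many repeated ancestors are defined; repetition level is the
# level of the most recently repeated field (0 at the start of a record).

def stripe(records, path):
    out = []

    def walk(node, fields, depth, rep):
        field, rest = fields[0], fields[1:]
        if not rest:                        # required leaf scalar
            out.append((node.get(field), rep, depth))
            return
        groups = node.get(field, [])
        if not groups:                      # repeated group absent: NULL
            out.append((None, rep, depth))
            return
        for i, child in enumerate(groups):
            walk(child, rest, depth + 1, rep if i == 0 else depth + 1)

    for rec in records:
        walk(rec, path, 0, 0)
    return out

record = {
    "DocId": 10,
    "Name": [
        {"Language": [{"Code": "en-us"}, {"Code": "en"}]},
        {},
        {"Language": [{"Code": "en-gb"}]},
    ],
}
levels = stripe([record], ["Name", "Language", "Code"])
```

Run on the running example, this reproduces the `Name.Language.Code` stripe: ('en-us', 0, 2), ('en', 2, 2), (None, 1, 1), ('en-gb', 1, 2).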
[0227] The process depicted in FIG. 20 is an example process for generating columnar stripes. Variations on the process are contemplated. For example, the operations of the boxes may not be performed sequentially as depicted in the flowchart. Stripes for multiple fields may be generated at a single time. The repetition level and definition level may be generated as each value is obtained from a record. The columnar stripe may not be generated as a whole. Instead, each block may be generated from the stripe and independently compressed. Thus, the flowchart may represent a conceptual mechanism for understanding the generation of stripes, but is not intended to be limiting. A process for generating columnar data is depicted in the algorithm of FIG. 4, which may not correspond to the operations described in relation to FIG. 20.
[0228] FIG. 21 is a block diagram illustrating an example of a system 2100 that implements a web service for data storage and processing. In general, the columnar data processing system 2130 in the lower-right side of FIG. 21 represents components of the system illustrated in FIG. 19 (which illustrates a block diagram of a system for generating and processing columnar storage representations of nested records). As described in more detail throughout this document, the columnar data processing system 2130 may execute efficient queries on columnar data that is stored in repository 2132. The remaining components of the data storage and processing service 2102 support a web service that stores data, allows external users (e.g., individuals accessing the service 2102 over the internet) to import that data into tables, and, from the user's perspective, perform queries over those tables. The data underlying those tables may be stored as columnar data and the queries over the tables may be implemented by the querying capabilities of the columnar data processing system 2130. These external users use Application Programming Interfaces (API) 2104, 2134, and 2124 to upload data to the data storage and processing service 2102, import select portions of the uploaded data into tables, and perform queries on the tables.
[0229] External users may use the Objects API 2104 to upload data into the object storage 2106, potentially aggregating in a single service data that streams regularly from many computing devices. External users may define tables and transfer the data that is located in the object storage 2106 to the tables. The transfer can be performed upon user request or automatically by the service 2102 as new data is uploaded to the object storage 2106. The bulk data that is referenced in tables may be stored as columnar data in storage 2132, while the metadata for the tables may be stored separately in the table metadata storage 2120. The external users may run efficient queries on the tables using the Query API 2124. The queries on the tables may be implemented as queries on the underlying columnar data in storage 2132, and the processing of the queries on the columnar data in storage 2132 may be performed by the columnar data processing system 2130, as described throughout this document.
[0230] The object storage 2106 that is provided to external users through the Objects API 2104 is described in detail first. The object storage 2106 hosts data that may be accessible through the Objects API 2104 to numerous external users. As an illustration, more and more log data that is generated by websites is being hosted in the cloud by remote services that specialize in data hosting, as opposed to the websites themselves storing the log files on their own networks. Such cloud-based storage may be particularly beneficial when data that is continuously generated by many geographically dispersed computers needs to be aggregated in one place, available to multiple different users, and occasionally analyzed.
[0231] The object storage 2106 may include objects from a variety of users that are grouped into buckets. Each bucket may be a flat container that groups objects and provides a unique namespace for the group of objects. An external user may own a collection of buckets and assign access settings to each bucket. Thus, objects in one bucket may be private to a few users while objects in another bucket may be publicly accessible on the internet. The buckets may have a universally unique name among all buckets owned by external users. In some examples, the buckets exist in a flat namespace such that the buckets are not nestable.
[0232] Each object may be stored as an opaque collection of bytes. In other words, the object storage 2106 may receive through the Objects API 2104 different types of data, but may treat the received data as a chunk of data without regard to the format of the data. Each object may have corresponding metadata that is stored in a separate table or database. Each object may be assigned to one bucket, and each object in a bucket may have a name that is unique to the bucket. Thus each object may have a globally unique name when addressed with reference to the object's parent bucket. Like buckets, each object may have its own access control list, enabling sharing data over a network (e.g., the internet) between a variety of users with different permissions.
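The naming and access-control scheme described above can be sketched in a few lines; the class shape and ACL representation are illustrative assumptions, not the service's actual API:

```python
# Sketch of the bucket/object model: buckets form a flat, globally unique
# namespace; each object name is unique within its bucket, so the pair
# "bucket/object" is globally unique. Objects are opaque byte strings, and
# each container carries its own access control list.

class Bucket:
    def __init__(self, name, acl=("owner",)):
        self.name, self.acl, self.objects = name, set(acl), {}

    def put(self, obj_name, data):
        self.objects[obj_name] = bytes(data)   # stored as opaque bytes

    def global_name(self, obj_name):
        return f"{self.name}/{obj_name}"

logs = Bucket("site-logs", acl=("owner", "analyst"))
logs.put("2010-05-01.log", b"GET /index ...")
```

Because buckets are not nestable, the two-segment name is all that is needed to address any object in the service.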
[0233] The interface provided by the Objects API 2104 to exchange data may be a RESTful (REpresentational State Transfer) HTTP interface that employs industry standard, or proprietary, protocols. For example, external users may employ GET, PUT, POST, HEAD, and DELETE actions to interact with objects that are stored in the object storage 2106. The Objects API 2104 provides a sequential interface for writing and reading data to objects in the object storage 2106. In some examples, the Objects API 2104 provides read-only access to some of the objects. Thus, a user may delete and replace objects, but may not incrementally modify objects. In some examples, the data storage and processing service 2102 may not be configured for external customers to perform SQL-like queries on the objects directly. The data in the objects may be first placed into structured tables before such queries are performed.
[0234] As an illustration, HTTP API requests may be received at the frontend server 2126 from a remote computerized device that is associated with an external user. The frontend server 2126 forwards the request to an API collection implementor 2116. The API collection implementor 2116 stores API libraries, processes the request based on the stored libraries, and generates corresponding requests to the appropriate components of the data storage and processing service 2102. Because API requests for the Objects API 2104 pertain to object storage 2106, the API collection implementor 2116 forwards a request to the object storage 2106.
[0235] The data storage and processing service 2102 provides the ability to transfer data that is stored in objects into tables and run efficient queries on the tables using the columnar data processing system 2130. For example, users can append data to tables, create new tables, and manage sharing permissions for tables. The data in the tables may be stored as columnar data in the columnar data storage 2132. Accordingly, when data is placed in a table, the data storage and processing service 2102 transfers data from the object storage 2106 to the columnar data storage 2132. The import job manager 2108 manages the process of transferring the data and performs conversion operations on the data.
[0236] Each table represents a structured data set that a user may query through the Query API 2124. Users can create tables, import data into tables, share tables, run queries over the tables, and use the tables in data analysis pipelines. The external user exposure to a table may be an object that is stored in the object storage 2106 as a delegate object. A delegate object may be an object that provides an interface to a set of data and operations that are not stored in the object storage 2106. In other words, delegate objects may allow tables to be mapped into the namespace for the object storage 2106. Thus, each table's name may reside in the global object namespace and may be unique. A delegate object for a table may hold metadata that identifies the owner of the table, the access control list for the table, and the table identifier (which links the delegate object to additional table metadata, and is described in more detail below).
[0237] Thus, in one implementation, an external user sees tables as objects residing within buckets. The user may view a list of all tables in a bucket and may delete a table by deleting the corresponding delegate object, much in the same way that the user may view a list of objects and delete objects. When an external user makes a request that references a table via its object name, a reference to underlying table data is extracted from the delegate object and is used to service the request. For example, a delete operation on a table may trigger cleanup operations on the corresponding metadata in the table metadata storage 2120 and underlying columnar data in the columnar data storage 2132.
[0238] A table is created in response to a request through the Table API 2134. The table management system 2118 may create a table at a key of a database in the table metadata storage 2120, and then create a delegate object in the object storage 2106 to reference the key. The table metadata storage 2120 may hold metadata for the tables that are referenced by delegate objects. For example, a table identifier in a delegate object references a key in a row of the table metadata storage 2120. The table metadata storage 2120 stores, for the table and under the key, any combination of: (1) the table identifier, (2) a table revision number, (3) a table name, (4) a data reference set, (5) a schema description, and (6) data statistics.
[0239] The table name may be a back pointer to one or more buckets and objects that the table is associated with. Storing the table name may facilitate garbage collection and help avoid conflicts if the table is deleted and a new table with the same external (object) name is later created. The data reference set may include path references to the columnar data 2132 that backs the table (e.g., that stores the bulk data for the tables). The schema description may allow for efficient schema validation during data management operations. The data statistics may identify information about the table, for example, a number of rows, a size of data referenced by the table, and a last updated timestamp.
[0240] In some examples, a table is filled with data from objects in the object storage 2106 in response to a demand by a user (e.g., a Table API call). For example, the import job manager 2108 may receive an ad-hoc request from a data owner to import the data from a set of objects from the data storage 2106 into a table. In other examples, a data owner may generate a job that is executed by the import job manager 2108, and that establishes a continuous import service that takes objects that are newly placed in a bucket and "auto-imports" the data in the objects into a table. After the data from the objects is imported into the table, the objects may be automatically deleted without user input.
[0241] The import job manager 2108 receives requests to import data from an object into a table, and in response performs several operations to transfer data to the columnar data storage 2132. The job manager 2108 creates job metadata to track the import and launches a coordinator 2110. The job metadata is stored in the import job metadata storage 2122.
[0242] In particular, the import job manager 2108 may aggregate the content of objects, perform data format transformations, shard the data into appropriately sized chunks, move the data into a different storage layer, and place the chunks of data in the columnar data storage 2132 for access by the columnar data processing system 2130. In some examples, the import job manager 2108 transforms the object data into columnar data. In other examples, the import job manager 2108 places non-columnar chunks of data in the columnar data storage 2132, and the columnar data processing system 2130 converts the non-columnar chunks of data to a columnar format.
[0243] The coordinator 2110 is invoked by the import job manager 2108 to analyze an import job and launch an appropriate number of workers to process the data in a reasonable amount of time. The coordinator 2110 analyzes the input data objects and decides how to assign the data objects among individual workers 2112 that process the input data objects. The coordinator 2110 spawns individual worker instances and observes worker progress. The coordinator 2110 ensures that the data handled by each worker is not too small or large.
[0244] In some circumstances, use of a single coordinator and many workers may enable the import job manager 2108 to scale with data size and a number of input data objects. If a failure is detected or a worker is inefficient, the worker may be restarted or the worker's tasks may be reassigned. Each worker instance 2112 may sequentially read input data objects, perform appropriate format conversions, and store the data in sharded bundles of columnar data 2132. In some examples, worker instances are assigned to run in the same clusters where the input data is located (because cross-datacenter traffic can be inefficient and expensive).
[0245] The workers convert data from a given set of inputs into a sharded set of columnar data bundles, and append the bundles to the appropriate table. Input data may be any schematized data format that the system understands. Input data may be text or binary form, and the schema may be incorporated in the data format or specified along with the data. Example input data may be: (1) a record data type (a self-contained and self-describing structure for record-stored data), (2) a column data type (a self-contained and self-describing structure for column-stored data), (3) text based formats for which the data storage and processing service 2102 knows the schema (field separated or fixed field length formats such as Apache, AppEngine, or W3C logs), or (4) text based formats that can be described by name/type value pairs (field separated or fixed field length, and the user specifies the name/type pairs and separators or field sizes).
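Input case (4) — field-separated text described by user-supplied name/type pairs — can be sketched as a small converter. The supported type names and the separator default are assumptions for illustration:

```python
# Hypothetical converter for field-separated text whose schema the user
# supplies as (name, type) pairs, per input case (4) above.
TYPES = {"string": str, "int": int, "float": float}

def parse_rows(text, schema, sep=","):
    """Convert separator-delimited lines into typed records."""
    names_types = [(name, TYPES[type_name]) for name, type_name in schema]
    records = []
    for line in text.strip().splitlines():
        fields = line.split(sep)
        records.append({name: cast(value)
                        for (name, cast), value in zip(names_types, fields)})
    return records

rows = parse_rows("alice,3\nbob,7", [("user", "string"), ("count", "int")])
```

A worker would apply a converter like this before sharding the resulting records into columnar bundles.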
[0246] The coalescer and garbage collector 2114 may periodically scan tables for issues to fix. The coalescer may monitor contents of the columnar data storage 2132 and detect columnar data bundles that are too small and may be coalesced into larger bundles. The garbage collector detects columnar data bundles that are not referenced by any tables and may be deleted. Similarly, dangling table metadata may be cleaned up, for example, when a table is generated but the table creation process fails before a corresponding delegate object is generated in the object storage 2106.
[0247] Once a table has been created and data has been imported into the table (e.g., by generating table metadata, generating the delegate object in the data storage 2106, and generating corresponding columnar data 2132), user queries may be run on the tables. The queries may be SQL-like and may be received from external users through the Query API 2124. The frontend server 2126 receives the Query API requests and forwards the requests to the API collection implementor 2116, which passes the queries to the query manager 2128.
[0248] The query manager 2128 takes SQL-like queries and an authenticated token for the external user, verifies that the external user can access the tables referenced in the query, and hands the request off to the columnar data processing system 2130 and table management system 2118 for executing the query. As described earlier in the document, the columnar data processing system 2130 may query columnar data 2132 and output result data (e.g., columns of result data). The result data may be placed in a table defined by the query, returned to the external user in a format defined by data format templates, or placed in an object defined in the API call.
[0249] The API collection implementor 2116 handles API calls through the Objects API 2104, Table API 2134, and the Query API 2124. Example API functions are detailed below.
Objects API
[0250] ObjectStorage.DeleteObject(string object): Delete an object or a table (e.g., by causing the delegate object to invoke operations for removing the table).
[0251] ObjectStorage.ListObjects(string bucket): Lists the objects and tables in a bucket. Objects may include a tag that allows the objects to be labeled as tables or other types of data. Thus, the described ListObjects API function may include an optional parameter that identifies the types of objects to list within a bucket.
[0252] ObjectStorage.PutObject(string object): Pushes data from an external user's computerized device to the object storage 2106.
[0253] ObjectStorage.GetObject(string object): Returns to an external user's computerized device an object from the object storage 2106. When run on a tableName, the GetObject API call may return: (1) the equivalent of a "select * from tableName" statement; (2) a description of the contents of the table, the table references, and the records added to the table; or (3) metadata about the table.
Table API
[0254] CreateTable(string bucket, string tableName, ACLs): Creates tables in object storage 2106 with the specified name tableName, with optional access control lists applied to the table.
[0255] ImportRecords(string sourceBucket, string sourceObject, string dataFormat, string destBucket, string destTableName, ImportToken importToken, Boolean extendSchema=true): Initiates an asynchronous import of data into destTableName using dataFormat to determine how to read the content of sourceObject. If extendSchema is true, then the schema of destTableName is extended to support all imported data. If extendSchema is false, then objects that include data records that do not match destTableName's existing schema, or the data records themselves, are ignored. ImportToken stores a pointer to the import job initiated by a call to the ImportRecords function.
[0256] GetImportStatus(ImportToken importToken): Returns the status of the import call associated with importToken. The returned status may state whether the import is complete or in progress, the number of records imported, and the number of records with errors.
[0257] CreateContinuousImportService(Bucket sourceBucket, FormatDescription sourceFormat, Object destinationTable, Object importServiceName): Starts an import task that watches sourceBucket for new objects. When new objects are loaded into sourceBucket (e.g., by external users), the task imports the data in the objects to destinationTable. The sourceFormat attribute identifies the format of the objects in sourceBucket. The import service has a name in the objects' namespace, providing external users the ability to list, delete, and perform authorization operations on import services.
[0258] GetContinuousImportServiceStatus(Object importServiceName): Returns information describing whether the import service importServiceName is complete or in progress. May also return any recent errors.
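The call sequence implied by the Table API — create a table, start an import, then poll its status with the returned token — can be sketched against an in-memory stand-in. The service class and its internals are pure assumptions; only the method names and parameters mirror the API functions defined above:

```python
# In-memory stand-in so the Table API call sequence can run locally.
# The real service is remote; everything inside this class is hypothetical.
class FakeTableService:
    def __init__(self):
        self.jobs = {}

    def create_table(self, bucket, table_name, acls=None):
        # Mirrors CreateTable: the table lives at bucket.tableName.
        return f"{bucket}.{table_name}"

    def import_records(self, source_bucket, source_object, data_format,
                       dest_bucket, dest_table, extend_schema=True):
        # Mirrors ImportRecords: returns a token for the async job.
        token = f"job-{len(self.jobs)}"
        self.jobs[token] = {"state": "COMPLETE", "imported": 2, "errors": 0}
        return token

    def get_import_status(self, token):
        # Mirrors GetImportStatus: complete/in-progress, counts, errors.
        return self.jobs[token]

svc = FakeTableService()
table = svc.create_table("logsBucket", "clicks")
token = svc.import_records("logsBucket", "clicks-2011-01.csv", "csv",
                           "logsBucket", "clicks")
status = svc.get_import_status(token)
```

Against the real service the import would be asynchronous, so a caller would poll `get_import_status` until the state is no longer in-progress.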
Query API
[0259] Query(String sqlQuery, ExportEnum exportAs): Executes a query sqlQuery and returns data in the format specified by exportAs. The format of the returned data may be csv, xml, json, or viz. In some examples, GViz SQL and Pig Latin may be used instead of SQL. The exportAs field may be optional if query sqlQuery defines a table to output the data to.
[0260] Query(String sqlQuery, ExportEnum exportAs, Object exportToObject): Stores the results of query sqlQuery in object exportToObject.
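The exportAs formats named above can be illustrated with a small result formatter. This sketch covers only the csv and json cases (xml and viz are omitted), and the row representation is an assumption:

```python
import csv
import io
import json

def export_rows(rows, export_as):
    """Render query result rows (list of dicts) in a named export format."""
    if export_as == "json":
        return json.dumps(rows)
    if export_as == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unsupported export format: {export_as}")

out = export_rows([{"user": "alice", "count": 3}], "csv")
```

A Query call that names an exportToObject would then store a string like `out` as an object in the object storage rather than returning it to the caller.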
[0261] The collection of APIs may enable SQL-like summaries to be performed on large quantities of data that is imported into tables from the object storage 2106. The source objects in object storage 2106 may be aggregated from numerous web sources, each source having permission to place data in the same bucket of object storage 2106. Thus, the illustrated data storage and processing service 2102 can provide an aggregation and data import pipeline for the columnar data processing system 2130 described earlier in this document. The columnar data processing system 2130 can provide fast queries and aggregations of large datasets.
[0262] In addition to the SQL-like operations described earlier in this document, an external user may JOIN data in tables. Because the described columnar data processing system 2130 may be organized as a tree of serving shards, in which each shard communicates with its children and its parent, introducing a general JOIN may require adding a capability of communication between all pairs of leaf shards. The JOIN (local) function, which may work on input tables particular to a leaf, may work globally by first issuing a SHUFFLE command.
[0263] The SHUFFLE command ensures that tuples which should be joined are on the same shard. For example, a JOIN operation R JOIN S ON R.f1=S.f2 can be expressed by first shuffling R by f1 and S by f2. The semantics of the shuffle operation then guarantee that all tuples of R with a given value for f1 will be on one tablet, and all tuples of S which have the same value for f2 will be on a corresponding tablet. Thus, after the shuffles, a JOIN (local) may have the same effect as a global join.
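The shuffle-then-local-join semantics described above can be sketched directly: hash-partition R on f1 and S on f2 so that matching tuples land on the same shard, then join shard-by-shard. The shard count and helper names are assumptions, and real shards would live on separate leaf servers rather than in one process:

```python
# Sketch of SHUFFLE followed by JOIN (local) for R JOIN S ON R.f1 = S.f2.
N_SHARDS = 4

def shuffle(tuples, key):
    """Hash-partition tuples on `key` so equal keys share a shard."""
    shards = [[] for _ in range(N_SHARDS)]
    for t in tuples:
        shards[hash(t[key]) % N_SHARDS].append(t)
    return shards

def local_join(r_shard, s_shard, f1, f2):
    """Join two co-located shards; no cross-shard communication needed."""
    return [(r, s) for r in r_shard for s in s_shard if r[f1] == s[f2]]

R = [{"f1": 1, "a": "x"}, {"f1": 2, "a": "y"}]
S = [{"f2": 1, "b": "p"}, {"f2": 3, "b": "q"}]
joined = []
for r_shard, s_shard in zip(shuffle(R, "f1"), shuffle(S, "f2")):
    joined.extend(local_join(r_shard, s_shard, "f1", "f2"))
```

Because both tables are partitioned with the same hash function, the per-shard joins together produce the same result a global join would.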
[0264] In various examples, the data that is stored by the data storage and processing service 2102 is replicated among geographically dispersed server devices. For example, an object that is stored in object storage 2106 may be replicated among server devices in data centers that are hundreds of kilometers apart. Thus, a localized server failure, power outage, or natural disaster may not influence the availability of the object. Similarly, after the data in the object has been imported into a table, the columnar stripes that underlie the table and that reside in the columnar data storage 2132 may be replicated among geographically dispersed server devices. The table metadata that is stored in the table metadata storage 2120 may also be replicated among geographically dispersed server devices.
[0265] FIG. 22 is a flowchart showing an example of a process 2200 for performing data storage and processing. The process 2200 may be performed by the system illustrated in FIG. 21, and more particularly the data storage and processing service 2102.
[0266] In box 2202, a request to store data is received at a server system. For example, a server system that provides the data storage and processing service 2102 may implement an API that enables remote computing devices to upload data to the server system, for example, over the internet. The server system may receive a function call to upload data through the API and from a remote computing device. The function call may identify data to upload and a name for the data. The name for the data may identify a storage location for data (e.g., a bucket).
[0267] In some examples, the request may be received from a computing device that does not access the data storage and processing service 2102 over the internet. For example, a third party may physically ship one or more physical storage devices (e.g., CDs, DVDs, hard discs, or RAID enclosures) to a business entity that operates the data storage and processing service 2102. Employees of the business entity may load the data that is included in the physical storage device into the object storage 2106 using a computing device that is connected to the data storage and processing service 2102 over a local network. The local transfer of data to the object storage 2106 may not use the API.
[0268] In box 2204, the identified data is stored as an object in a repository at the server system. The repository may include a collection of "buckets" that are each configured to include one or more objects. Each bucket may have a name that is unique among the collection of buckets, and the objects in each bucket may have names that are unique to the bucket. Thus, each object may be addressable by a unique name path (e.g., bucketName.objectName). Each bucket may be owned by one or more external customers.
[0269] Storing the data (e.g., a record, collection of records, file, or collection of files) as an object may include determining that the remote device that is uploading the data is authorized to place objects in the identified bucket. For example, an external customer may create a bucket and assign specific user accounts as authorized to place data in the bucket and view the contents of the bucket. If a remote device logged in under one of the specific accounts requests to place data in the bucket, the request may be granted. Similar requests by non-authorized user accounts may be rejected.
[0270] In box 2206, a request is received to create a table. For example, the data storage and processing service 2102 may receive from a remote computing device an API function call requesting to create a table. A table may be a structured data set that a user may query. The request may define a name for the table, and define the fields for the table. For example, the request may include a schema that defines a structure for a type of record, and the table may be generated to store data for records of the type.
[0271] In box 2208, the table is created. For example, metadata for the table may be added under a row in a database. A delegate object that references the table row may be placed in the object repository. For example, if the API call requests to generate a table named bucketName.TableName, a delegate object that is named TableName may be placed in the bucket bucketName. The TableName delegate object may include an access control list for the table and a table identifier (e.g., an identifier of the database row that stores metadata for the table).
[0272] In box 2210, a request to import data in the object into the table is received. For example, the data storage and processing service 2102 may receive from a remote computing device an API function call requesting that data in the object be loaded into the table or appended to the end of a table that already includes data. In some examples, the request is received from a continuous import service. The continuous import service may periodically monitor a bucket and when the bucket includes new objects (e.g., when external customers place new objects in the bucket) the continuous import service requests that data in the new objects be appended to the table. In some examples, an API function call that establishes the continuous import service was received earlier. The customer-facing view of the continuous import service may be a delegate object.
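The continuous import service's behavior — watch a bucket and import only objects not yet seen — can be sketched as a polling step. The function names and the set-based bookkeeping are assumptions about one possible implementation:

```python
# One polling round of a continuous-import service: import any object in
# the current bucket listing that has not already been imported.
def poll_bucket(listing, already_imported, import_fn):
    """Import unseen objects and return the updated seen-set."""
    for name in listing:
        if name not in already_imported:
            import_fn(name)               # e.g., append object data to the table
            already_imported.add(name)
    return already_imported

imported = []
seen = poll_bucket(["a.csv", "b.csv"], set(), imported.append)
seen = poll_bucket(["a.csv", "b.csv", "c.csv"], seen, imported.append)
```

After a successful import, the text notes the source objects may be deleted automatically; a deletion step would slot in after `import_fn` here.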
[0273] In box 2212, the data in the object is converted into columnar format. For example, the object may include a set of records, and the records may be converted into a set of columnar stripes, where each stripe (or set of blocks of a stripe) describes a single attribute field of the records. The columnar stripes may include the repetition and definition levels described throughout this document.
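The record-to-stripe conversion can be sketched for the simple flat case: each attribute field becomes its own stripe of values. The repetition and definition levels described earlier in the document, which handle nested and repeated fields, are deliberately omitted from this minimal illustration:

```python
# Minimal column-shredding sketch for flat records: one stripe per field.
# Nested records with repetition/definition levels are not handled here.
def shred(records, fields):
    stripes = {f: [] for f in fields}
    for rec in records:
        for f in fields:
            stripes[f].append(rec.get(f))   # None marks a missing value
    return stripes

stripes = shred(
    [{"user": "alice", "count": 3}, {"user": "bob"}],
    ["user", "count"],
)
```

In the full system each stripe (or set of blocks of a stripe) would be sharded into bundles and placed in the columnar data storage 2132.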
[0274] In box 2214, the columnar data is stored in a repository. In some examples, the repository for the columnar data is different than the repository for the objects. For example, the repositories may be different storage layers implemented at a server system implementing the data storage and processing service 2102. Metadata that references the columnar data may be stored in the table database. Thus, a query of the table may include referencing the metadata in the table database to identify columnar data that corresponds to particular attribute fields for the data in the table.
[0275] In some examples, the request identifies several objects for which data is to be loaded into the table. The data content of the objects may be aggregated, data format transformations may be performed, the data may be sharded into appropriately sized chunks of columnar data, and the chunks of columnar data may be placed in the repository for columnar data.
[0276] In box 2216, a request is received to perform a query on the table. For example, the data storage and processing service 2102 may receive an API function call from a remote computing device requesting that a SQL-like query be run on the table. In some examples, the query operates on the table and one or more other tables. For example, the query may collect data having particular characteristics from each of two tables and place the aggregated data in a third table.
[0277] In box 2218, a determination is made whether the remote computing device requesting the query is authenticated to access the one or more tables specified in the query. For example, the delegate object for each table may have an access control list that identifies user accounts that may run queries on the table corresponding to the delegate object, delete the table, and add data to the table. If a remote computing device associated with a user account attempts to run a query on several tables, the data storage and processing service 2102 determines if the user account is authorized to query each of the tables.
[0278] In box 2220, a query is performed on columnar data. For example, queries are performed on the columnar data underlying the tables specified in the query request. A query manager 2128 may generate, based on the query received from the remote computing device, component queries that are performed on particular columns of data for the tables specified by the query. For example, the query may request data within a particular range for a single attribute in a table. The query manager 2128 may generate a component query on a collection of blocks of a columnar stripe that is associated with the single attribute, and may run other component queries on columnar stripes for other of the attributes in the query.
[0279] In box 2222, data is output based on the query. For example, the query may identify a table that the results of the query are to be placed in. Accordingly, the query manager 2128, table management system 2118, and columnar data processing system 2130 may place the result data in a table. Also, the query may identify one or more objects in which to place the results of the query. Thus, the results of the query may be placed in one or more objects in the object storage 2106. The API call requesting that the data be placed in the object may specify a data format for storage of the data in the object. Accordingly, outputting the data as one or more objects may include converting output columns of columnar data stored in the columnar data storage 2132 into a different type of data format for storage in the object storage 2106 (e.g., a record-based type of data format).
[0280] In some examples, the schema may be extensible. In other words, a third party may request minor changes to the schema (e.g., by editing the schema through an API or uploading a new schema that includes minor changes). The minor changes can include adding new optional fields to the schema. In some examples, however, the user may not be able to add new required fields or remove existing required fields from the schema. The schema may be updated without rebuilding or regenerating the entire data set. As such, a new columnar stripe may be added for a newly added optional field without modification of the existing columnar stripes.
[0281] In some examples, a third party user may be able to change field names and add aliases for field names. For example, a schema may include a field that is named "Time." A third party user may decide to change the name of the field to "LocalTime." As such, newly submitted data records may include "LocalTime" fields, while data records that are already stored by the data storage and processing service 2102 may include "Time" fields. The data storage and processing service 2102 may recognize the fields "Time" and "LocalTime" as aliases of each other. As an example, the data storage and processing service 2102 may store an index that matches field names to a unique identifier for a data element. The index may associate both the "Time" and "LocalTime" aliases with a unique identifier for a field (e.g., the identifier "1A452BC"). As such, the unique identifier may be associated with and designate a single columnar stripe that stores all the data values for the "Time" and "LocalTime" fields. In some examples, the data records stored in the object storage 2106 also identify fields with unique identifiers and do not identify fields with names that can change.
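The alias index described above can be sketched directly: every name resolves to a stable unique identifier, and the identifier designates the columnar stripe, so renaming a field never requires rewriting data. The class shape is an assumption; the identifier value is the one used in the example above:

```python
# Sketch of a field-name alias index: names map to a stable unique
# identifier, and the identifier designates the backing columnar stripe.
class AliasIndex:
    def __init__(self):
        self.name_to_id = {}

    def register(self, name, field_id):
        self.name_to_id[name] = field_id

    def add_alias(self, new_name, old_name):
        # Both names now resolve to the same stripe identifier.
        self.name_to_id[new_name] = self.name_to_id[old_name]

    def resolve(self, name):
        return self.name_to_id[name]

idx = AliasIndex()
idx.register("Time", "1A452BC")
idx.add_alias("LocalTime", "Time")
```

Queries against either name would be routed to the same stripe via `resolve`, which is why records stored by identifier rather than by name are immune to renames.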
[0282] Also, the data may be directly output to the remote computing device, additionally or instead of placing the data in a table or an object. For example, a remote computing device requesting the query may in response receive the results of the query. The results of the query may be in various formats (e.g., CSV or data for reconstructing a display of the output table).
[0283] FIG. 23 is a block diagram of computing devices 2300, 2350 that may be used to implement the systems and methods described in this document, as either a client or as a server or plurality of servers. Computing device 2300 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 2350 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. Additionally computing device 2300 or 2350.
[0284] Computing device 2300 includes a processor 2302, memory 2304, a storage device 2306, a high-speed interface 2308 connecting to memory 2304 and high-speed expansion ports 2310, and a low speed interface 2312 connecting to low speed bus 2314 and storage device 2306. Each of the components 2302, 2304, 2306, 2308, 2310, and 2312, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 2302 can process instructions for execution within the computing device 2300, including instructions stored in the memory 2304 or on the storage device 2306 to display graphical information for a GUI on an external input/output device, such as display 2316 coupled to high speed interface 2308. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 2300 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
[0285] The memory 2304 stores information within the computing device 2300. In one implementation, the memory 2304 is a volatile memory unit or units. In another implementation, the memory 2304 is a non-volatile memory unit or units. The memory 2304 may also be another form of computer-readable medium, such as a magnetic or optical disk.
[0286] The storage device 2306 is capable of providing mass storage for the computing device 2300. In one implementation, the storage device 230 2304, the storage device 2306, or memory on processor 2302.
[0287] The high speed controller 2308 manages bandwidth-intensive operations for the computing device 2300, while the low speed controller 2312 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 2308 is coupled to memory 2304, display 2316 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 2310, which may accept various expansion cards (not shown). In the implementation, low-speed controller 2312 is coupled to storage device 2306 and low-speed expansion port 23.
[0288] The computing device 2300 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 2320, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 2324. In addition, it may be implemented in a personal computer such as a laptop computer 2322. Alternatively, components from computing device 2300 may be combined with other components in a mobile device (not shown), such as device 2350. Each of such devices may contain one or more of computing device 2300, 2350, and an entire system may be made up of multiple computing devices 2300, 2350 communicating with each other.
[0289] Computing device 2350 includes a processor 2352, memory 2364, an input/output device such as a display 2354, a communication interface 2366, and a transceiver 2368, among other components. The device 2350 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 2350, 2352, 2364, 2354, 2366, and 2368, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
[0290] The processor 2352 can execute instructions within the computing device 2350, including instructions stored in the memory 23 2350, such as control of user interfaces, applications run by device 2350, and wireless communication by device 2350.
[0291] Processor 2352 may communicate with a user through control interface 2358 and display interface 2356 coupled to a display 2354. The display 2354 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 2356 may comprise appropriate circuitry for driving the display 2354 to present graphical and other information to a user. The control interface 2358 may receive commands from a user and convert them for submission to the processor 2352. In addition, an external interface 2362 may be provided in communication with processor 2352, so as to enable near area communication of device 2350 with other devices. External interface 2362 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
[0292] The memory 2364 stores information within the computing device 2350. The memory 2364 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 2374 may also be provided and connected to device 2350 through expansion interface 2372, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 2374 may provide extra storage space for device 2350, or may also store applications or other information for device 2350. Specifically, expansion memory 2374 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 2374 may be provided as a security module for device 2350, and may be programmed with instructions that permit secure use of device 2350. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
[0293] 2364, expansion memory 2374, or memory on processor 2352 that may be received, for example, over transceiver 2368 or external interface 2362.
[0294] Device 2350 may communicate wirelessly through communication interface 2366, which may include digital signal processing circuitry where necessary. Communication interface 23 2368. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 2370 may provide additional navigation- and location-related wireless data to device 2350, which may be used as appropriate by applications running on device 2350.
[0295] Device 2350 may also communicate audibly using audio codec 2360, which may receive spoken information from a user and convert it to usable digital information. Audio codec 2360 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 2350. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 2350.
[0296] The computing device 2350 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 2380. It may also be implemented as part of a smartphone 2382, personal digital assistant, or other similar mobile device.
[0301] Although a few implementations have been described in detail above, other modifications are possible. Moreover, other mechanisms for generating and processing columnar storage representations of nested records may be used.
Patent applications by Andrew Kadatch, Redmond, WA US
Patent applications by Michael Sheldon, Seattle, WA US
User Contributions:
Comment about this patent or add new information about this topic: | http://www.faqs.org/patents/app/20120016901 | CC-MAIN-2014-52 | refinedweb | 22,648 | 52.29 |
Hello - Struts
Hello Hi Friends,
Thakns for continue reply
I want to going with connect database using oracle10g in struts please write the code and send me its very urgent
only connect to the database code
Hi
Hello - Struts
Hello Hi,
Can u tell me what is hard code please send example of hard code... Hi Friend !
Hard coding refers to the software... into the source code of a program or other executable object, or fixed formatting of the data
Hello ..
Hello .. Hello,
I need a code for ..
Want to read csv file (which is having name nd mobile number) from jsp and if i give search by name... displayed .
the code must be like deployable jar file so that i can run it in which
hello
character is vowel or not.
Hello Friend,
Try the following code...hello i have to write a program that stores vowels (a,e,i,o and u...)
{
Hello - Struts
Hello Hi friends,
I ask some question please read carefully and let me know I want to create installation file........it is possible to create .exe file in java.....if i am using core java for logic events and control
Struts 2 Hello World Example
Struts 2 Hello World Example - Video Tutorial that shows how to create Hello... how to develop 'Hello World'
application in Struts 2 framework. We... the step-by-step process to create the Hello World
application using the Struts
struts code - Struts
struts code Hi all,
i am writing one simple application using struts framework. In this application i thought to bring the different menus of my application in a tree view.Like what we have in window explorer when we click
Hello
Hello Hello sir i want to store upload doc file in ms access by using servlet. Can i store file in access.one another things access only text size is 255 character but my file is up to 2mb how i can store
hello
hello what is the code for adding groups in contacts using servlet and jsp???pls help me
hello
hello i have to write a program in swings so that the program could exhibit the use of shortcut keys like ctrl+s.
and another one is that i have to create an applet with a button labelled "who's number one" and whenever i click
pls review my code - Struts
pls review my code Hello friends,
This is the code in struts. when i click on the submit button.
It is showing the blank page. Pls respond soon its urgent.
Thanks in advance.
public class LOGINAction extends Action
Pls review my code - Struts
Pls review my code Hello friends,
this is my code in struts action class
this page contains checkboxes and radiobuttons also.
when i enter...(15,aleForm.getCommand());
System.out.println("1111111");
int
Deploying Struts and testing struts 2 hello world application
;
Here is the output of struts 2 Hello World
Application:
When run this application...Deploying Struts and testing struts 2 hello world application
...\struts2\struts2helloworld\WEB-INF\src>
Testing Struts 2 Hello
Java Bean tags in struts 2 i need the reference of bean tags in struts 2. Thanks! Hello,Here is example of bean tags in struts 2:http... code will help you learn Struts 2.Thanks
Hello world
. Be careful when you write the java code in your text pad because java
is a case...
to) Hello World
Write the following code into your note
pad to run the Hello...
Hello world (First java Hello
I like to make a registration form in struts inwhich... compelete code.
thanks Hi friend,
Please give details with full source code to solve the problem.
Mention the technology you have used
Struts - Struts
Struts Hello
I have 2 java pages and 2 jsp pages in struts... with source code to solve the problem.
For read more information on Struts visit... for getting registration successfully
Now I want that Success.jsp should display
Java - Struts
Java hello friends,
i am using struts, in that i am using tiles framework. here i wrote the following code in tiles-def.xml
in struts-config file i wrote the following action tag
Struts 2.1.8 Hello World Example
Struts 2.1.8 Hello World Example
... to develop simple Hello World
example in Struts 2.8.1. You will also learn how... started with the Hello World tutorial using Struts 2.1.8:
The web configuration
Display Hello even before main get executed??
){
System.out.println("Darling");
}
}**
I want result as Hello! Thank you!! when I run...Display Hello even before main get executed?? I have a class (main... prompt and quick reply....
I got the required output with below code.
public
Struts - Struts
Struts Hello !
I have a servlet page and want to make login page in struts 1.1
What changes should I make for this?also write struts-config.xml and jsp code.
Code is shown below
Hello Sir I Have problem with My Java Project - Java Beginners
Hello Sir I Have problem with My Java Project Hello Sir I want Ur Mail Id To send U details and Project Source Code,
plz Give Me Ur Mail Id
hello there i need help
hello there i need help : i need to do a program like... OPtions:
once i have chosen an option then i should proceed here
if i choose b:
YOur Balance is __
if i chose D:
Enter you deposit amount:
if i choose W
Struts Code - Struts
Struts Code Hi
I executed "select * from example" query and stored all the values using bean . I displayed all the records stored in the jsp using struts . I am placing two links Update and Delete beside each record .
Now I
PHP Hello Video Tutorial for Beginners
on browser when called from browser.
Here is the code of the "Hello...Learn PHP Hello Video Tutorial - for beginners
This PHP Hello video tutorial teaches you how to create your first "Hello World"
example in PHP
Error - Struts
Error Hi,
I downloaded the roseindia first struts example and configured in eclips.
It is working fine. But when I add the new action and I... to test the examples
Run Struts 2 Hello
Core Java Hello World Example
will create here a Hello World Java program then I will explain the
terms what...Create Java Hello World Program
This tutorial explains you how to create a simple core Java "Hello World"
application. The Hello World application
Hello World Program
Hello World Program write a java program that continuously prints HelloWorld! to the screen(once every second ) and exists when press the enter key
Hi Friend,
Try the following code:
public class
When i click on Monitor Tomcat, it shows
When i click on Monitor Tomcat, it shows To run servlet i have seen.../introductiontoconfigrationservlet.shtml
Hello i followed each and every step same to same as given, i have installed java 7 and tomcat 7,
when i click on Monitor Tomcat it shows - Java Beginners
"code to large error for try statement" I have one idea this page is break...Hello Hi friends,
I have some query please suggest me
I have 290 fields and very large form......I want to insert data in the table
Hello - Java Beginners
Hello Hi vineet,
In the vendor form added one field extra in database,in html page and .jsp also but i am input all fields in the form... code where to add the new field.
Thanks
struts-config.xml - Struts
struts-config.xml in struts-config.xml i have seen some code like
in this what is the meaning of "{1}".
when u used like this? what is the purpose of
pls review my code - Struts
pls review my code When i click on the submit page i am getting a blank page
Pls help me.
thanks in advance.
public ActionForward execute(
ActionMapping mapping,
ActionForm form,
HttpServletRequest request
please help me solve this problem when i am create database connection using servlecontext
servletcontext . in this code when i login first time it will exceute sucessfully but when i use again the same page it will throw sql exception but i don't...please help me solve this problem when i am create database connection using
struts
struts i have one textbox for date field.when i selected date from datecalendar then the corresponding date will appear in textbox.i want code for this in struts.plz help me
Reply - Struts
Reply
Thanks For Nice responce Technologies::--JSP
please write the code and send me....when click "add new button" then get the null value...its urgent... Hi
can u explain in details about your project
Very simple `Hello world' java program that prints HelloWorld
HelloWorld.java - the source code for the "Hello, world!"
program... to compile the source code.
When you compile the program you'll create a byte... this output
Hello, world!
Understanding the HelloWorld.java code
Let's
Struts Hello Experts,
How can i comapare
in jsp scriptlet in if conditions like
jQuery Hello World
jQuery Hello World example
... application called
"Hello World jQuery". This application will simply display...'s start developing the Hello World application in jQuery.
Video Tutorial
Struts file uploading - Struts
when required.
I could not use the Struts API FormFile since...Struts file uploading Hi all,
My application I am uploading files using Struts FormFile.
Below is the code.
NewDocumentForm
struts
the checkbox.i want code in struts...struts I have no.of checkboxes in jsp.those checkboxes values came from the databases.we don't know howmany checkbox values are came from
Smarty Hello World Program
How to write "Hello World" program?
In any smarty program we need two files: a) .php file and b).tpl file.
i) .php file: Which... folder and store the tpl files inside the templates folder.
First example:
i
Spring Hello World prog - Spring
Spring Hello World prog I used running the helloworld prog code... getting null pointer exception. as shown below. I added all the jars and my folder structure is similar to what showed in the website. I guess the null pointer
Reply - Struts
Reply Hello Friends,
please write the code in struts and send me I want to display "Welcome to Struts" please send me code its very urgent... connection
Thanks HelloWorld.jsp
Struts 2 Hello World
How to check a checkbox - Struts
How to check a checkbox Hello Community,
How can i check a checkbox...
--------------------------- Thanks for this code snippet.
I meant... tags i am getting an error that the property "checked" is not defined. Can someone
Hello world (First java program)
. Be careful when
you write the java code in your text pad because java is a case...; !=(not equal
to) Hello World
Write the following code into your note
pad to run the Hello...
Hello world (First java program)
write data to a pdf file when i run jsp page
to the libraries.the pdf file are not opened when i execute the program.please send the code to open the pdf file when i execute the jsp page...write data to a pdf file when i run jsp page Hi,
<%@page import...://
I hope that, this link will help you
automatically break line when ever I put enter.
automatically break line when ever I put enter. code is working fine... StringBuffer(OriginalMsg);
for(int i = 0; i < sb.length(); i++){
int temp = 0;
temp = (int)sb.charAt(i);
temp
Java Hello World code example
Java Hello World code example Hi,
Here is my code of Hello World...(String[] args) {
System.out.println("Hello, World... Tutorials and download running code examples.
Thanks
code
: "+e);
}
}
}
However, when the preceding code is executed it results... code to create the RMI
client on the local machine:
import java.rmi.*;
public class HelloClient
{
public static void main(String args[])
{
try
{
Hello h
Struts Tutorials
is provided with the example code. Many advance topics like Tiles, Struts Validation...
2.ApplicationResources_it.properties
Strictly Struts
Hello readers. Yes this is an article... application development using Struts. I will address issues with designing Action
Struts - Struts
Struts Hi,
I m getting Error when runing struts application.
i have already define path in web.xml
i m sending --
ActionServlet...
/WEB-INF/struts-config.xml am new to struts.Please send the sample code for login and registration sample code with backend as mysql database.Please send the code immediately.
Please its urgent.
Regards,
Valarmathi Hi
Jkmegamenu drop downs moving left when window is resized in Chrome
on. It works and looks nice, however when the web browser(Chrome) window is resized the drop down divs move left.
I got the JavaScript code from this link...Jkmegamenu drop downs moving left when window is resized in Chrome
Struts + HTML:Button not workin - Struts
Struts + HTML:Button not workin Hi,
I am new to struts. So pls... in same JSP page.
As a start, i want to display a message when my actionclass... displays "null";
I have defined button_clicked in bean class.
I thought when i
Struts 2 hello world application using annotation
Struts 2 hello world application using annotation
Annotation... be a compiler instruction that describes how to
compile the code... an action Annotation you may write code as
@Action(value="/actionName"
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://www.roseindia.net/tutorialhelp/comment/13829 | CC-MAIN-2015-18 | refinedweb | 2,230 | 75.3 |
The only problem I am running into when I run this program is that fact that my 2nd for loop doesn't print out 1,2,3 just 0,1,2. I understand why it prints out the second type but how do I fix it ? My first thought was to replace it with for(int i=1;i<3;++i) but that will skip my 0 position element all together or give me a run-time error
If anyone can help me solve this problem It would be very helpful.
Code:#include <iostream> #include<vector> #include<string> using namespace std; int main() { string game0,game1,game2; vector<string>myGames; vector<string>::iterator iter; cout<<"What is Your Favorite Game :"; cin>>game0; cout<<"What is Your 2nd Favorite Game :"; cin>>game1; cout<<"What is Your 3rd Favorite Game : "; cin>>game2; myGames.push_back(game0); myGames.push_back(game1); myGames.push_back(game2); cout<<"\n\t\tTop 3 Favorite Games\n"; for( iter=myGames.begin(); iter!=myGames.end();++iter) cout<<". "<<*iter<<endl; int choice; char replace; cout<<"\nDo you want to replace a game (y/n) :"; cin>>replace; while (replace=='y') { for(int i = 0; i<myGames.size();++i) cout<<i<<". "<<myGames[i]<<endl; cout<<"\nWhat is Your Choice :"; cin>>choice; switch (choice) { case 1: myGames.erase(myGames.begin()); cout<<"\nWhat's Your New #1 Game ? "; cin>>game0; myGames.insert(myGames.begin(),game0); break; case 2: myGames.erase(myGames.begin()+1); cout<<"\nWhat's Your New #2 Game ? "; cin>>game1; myGames.insert(myGames.begin()+1,game1); break; case 3: myGames.erase(myGames.begin()+2); cout<<"\nWhat's Your New #3 Game ? "; cin>>game2; myGames.insert(myGames.begin()+2,game2); break; default: cout<<"Wrong Number Try Again\n"; continue; } cout<<"\nDo you want to replace another (y/n) :" ; cin>>replace; cout<<"\n"; } cout<<"\n\t***Top 3 Favorite Games***\n"; for(int i = 0; i<myGames.size();++i) cout<<i<<". "<<myGames[i]<<endl; return 0; } | http://cboard.cprogramming.com/cplusplus-programming/133925-menu-numbering-problem.html | CC-MAIN-2015-35 | refinedweb | 320 | 64.41 |
Checking (and Upgrading) Template Engines in Eleventy
Yesterday a follower on Twitter encountered an interesting issue with Eleventy that turned into a bit of a bigger issue. Let's start with his question.
So it seems like LiquidJS added support for where filters but maybe that hasn't been rolled into Eleventy yet?— Richard Herbert (@richardherbert) February 6, 2020
The
where filter in Liquid provides a simple way to select values in an array by simple property matching. So consider this array:
[ {"name":"Fred","gender":"male"}, {"name":"Ginger","gender":"female"}, {"name":"Bob","gender":"male"}, {"name":"Lindy","gender":"female"} ]
I've got four cats with names and genders. By using the where filter on gender, I could select different cats like so:
{% assign male_cats = cats | where: "gender", "male" %} {% assign female_cats = cats | where: "gender", "female" %} <h3>Male Cats</h3> {% for cat in male_cats %} {{ cat.name }}, {{ cat.gender }}<br/> {% endfor %} <p/> <h3>Female Cats</h3> {% for cat in female_cats %} {{ cat.name }}, {{ cat.gender }}<br/> {% endfor %}
If you run this in Eleventy though, you get this:
The
assign works fine, but there's no filtering.
Why?
Turns out Eleventy ships with an older version of the Liquid template engine. This then leads to the question, how do you know what version Eleventy ships with? If you go to the docs for Liquid in Eleventy, you'll see it isn't mentioned. I raised an issue on this saying the docs should make it more clear (for each engine obviously). It could actually be in the docs and I don't see it of course.
Luckily though you can provide your own version of Liquid (or Nunjucks, or Handlebars, etc) by using
eleventyConfig.setLibrary in your
.eleventy.js file. The docs show this example:
module.exports = function(eleventyConfig) { let liquidJs = require("liquidjs"); let options = { extname: ".liquid", dynamicPartials: true, strict_filters: true, root: ["_includes"] }; eleventyConfig.setLibrary("liquid", liquidJs(options)); };
I gave this a shot. I made a new directory, did
npm i liquidjs, and tried this code, but it threw an error. I checked the docs for liquidjs and saw that their initialization code was a bit different. I copied their code and ended up with this:
module.exports = eleventyConfig => { let { Liquid } = require('liquidjs'); let engine = new Liquid(); eleventyConfig.setLibrary("liquid", engine); }
Woot! But huge caveat here. Eleventy passes in it's own default options for Liquid. In my sample above I passed none so I'm using the liquidjs defaults instead. This could lead to backwards compatibility issues. This is discussed in another issue.
So what version of Liquid does Eleventy ship? The user @DirtyF commented that by using
npm outdated in a repo with Eleventy you can see the following:
Package Current Wanted Latest Location ejs 2.7.4 2.7.4 3.0.1 @11ty/eleventy handlebars 4.7.1 4.7.3 4.7.3 @11ty/eleventy liquidjs 6.4.3 6.4.3 9.6.2 @11ty/eleventy mustache 2.3.2 2.3.2 4.0.0 @11ty/eleventy
You could use this as a way to figure out exactly what features you have available when using your desired template language.
As I raised in my issue, I think Eleventy needs some kind of "statement" or plan about how it does upgrades, when/how it handles backwards compatibility, etc. I don't think there is an easy solution for this but I'm hoping to be able to help the project with this effort. (If you can't tell, I'm rather enamored with it. ;)
An Alternative
So what if you don't want to muck with how Liquid works in Eleventy? Well you've got options, lots of em!
One way is to just use a conditional:
{% for cat in cats %} {% if cat.gender == "female" %} {{ cat.name }}, {{ cat.gender }}<br/> {% endif %} {% endfor %}
While this implies looping over every record, keep in mind this is only done in development. In production it's just a plain static HTML file.
Another option is to use filters. Liquid filters support arguments, so you could build this generic utility:
eleventyConfig.addFilter("where2", function(value, prop, val) { // assumes value is an array return value.filter(p => p[prop] == val); });
I named it
where2 just for testing but you would probably want something else. This lets you use the same format that the newer Liquid uses:
{% assign test_cats = cats | where2: "gender", "female" %}
Finally, as yet another option, consider switching engines. What do I mean by that? While Liquid is definitely my preferred engine, EJS is incredibly flexible when it comes to code in your template. To be honest, it's too flexibly imo and encourages you to do stuff in your templates I think you should do elsewhere. But that flexibility could be a lifesaver, and one of the most awesome features of Eleventy is that you can easily switch one document to another engine by just changing the extension.
Header photo by Daniel Levis Pelusi on Unsplash | https://www.raymondcamden.com/2020/02/07/checking-and-upgrading-template-engines-in-eleventy | CC-MAIN-2020-16 | refinedweb | 822 | 66.13 |
%load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'retina'
Optimizing Linear Models
What are we optimizing?
In linear regression, we are:
- minimizing (i.e. optimizing) the loss function
- with respect to the linear regression parameters.
Here are the parallels to the example above:
- In the example above, we minimized f(w), the polynomial function. With linear regression, we are minimizing the mean squared error.
- In the example above, we minimized f(w) with respect to w, where w is the key parameter of f. With linear regression, we minimize mean squared error of our model prediction with respect to the linear regression parameters. (Let's call the parameters collectively \theta, such that \theta = (w, b).
Ingredients for "Optimizing" a Model
At this point, we have learned what the ingredients are for optimizing a model:
- A model, which is a function that maps inputs x to outputs y, and its parameters of the model.
- Not to belabour the point, but in our linear regression case, this is w and b;
- Usually, in the literature, we call this parameter set \theta, such that \theta encompasses all parameters of the model.
- Loss function, which tells us how bad our predictions are.
- Optimization routine, which tells the computer how to adjust the parameter values to minimize the loss function.
Keep note: Because we are optimizing the loss w.r.t. two parameters, finding the w and b coordinates that minimize the loss is like finding the minima of a bowl.
The latter point, which is "how to adjust the parameter values to minimize the loss function", is the key point to understand here.
Writing this in JAX/NumPy
How do we optimize the parameters of our linear regression model using JAX? Let's explore how to do this.
Exercise: Define the linear regression model
Firstly, let's define our model function.
Write it out as a Python function,
named
linear_model,
such that the parameters \theta are the first argument,
and the data
x are the second argument.
It should return the model prediction.
What should the data type of \theta be? You can decide, as long as it's a built-in Python data type, or NumPy data type, or some combination of.
# Exercise: Define the model in this function def linear_model(theta, x): pass from dl_workshop.answers import linear.)
Exercise: Initialize linear regression model parameters using random numbers
Using a random number generator,
such as the
numpy.random.normal function,
write a function that returns
random number starting points for each linear model parameter.
Make sure it returns params in the form that are accepted by
the
linear_model function defined above.
Hint: NumPy's random module (which is distinct from JAX's) has been imported for you in the namespace
npr.
def initialize_linear_params(): pass # Comment this out if you fill in your answer above. from dl_workshop.answers import initialize_linear_params theta = initialize_linear_params()
Exercise: Define the mean squared error loss function with linear model parameters as first argument
Now, define the mean squared error loss function, called
mseloss,
such that
1. the parameters \theta are accepted as the first argument,
2.
model function as the second argument,
3.
x as the third argument,
4.
y as the fourth argument, and
5. returns a scalar valued result.
This is the function we will be differentiating,
and JAX's
grad function will take the derivative of the function w.r.t. the first argument.
Thus, \theta must be the first argument!
# Differentiable loss function w.r.t. 1st argument def mseloss(theta, model, x, y): pass from dl_workshop.answers import mseloss
Now, we generate a new function called
dmseloss, by calling
grad on
mseloss!
The new function
dmseloss will have the exact same signature
as
mseloss,
but will instead return the value of the gradient
evaluated at each of the parameters in \theta,
in the same data structure as \theta.
# Put your answer here. # The actual dmseloss function is also present in the answers, # but _seriously_, go fill the one-liner to get dmseloss defined! # If you fill out the one-liner above, # remember to comment out the answer below # so that mine doesn't clobber over yours! from dl_workshop.answers import dmseloss
I've provided an execution of the function below, so that you have an intuition of what's being returned. In my implementation, because theta are passed in as a 2-tuple, the gradients are returned as a 2-tuple as well. The return type will match up with how you pass in the parameters.
from dl_workshop.answers import x, make_y, b_true, w_true # Create y by replacing my b_true and w_true with whatever you want y = make_y(x, w_true, b_true) dmseloss(dict(w=0.3, b=0.5), linear_model, x, y)
{'b': DeviceArray(-39.06814, dtype=float32), 'w': DeviceArray(-28.964378, dtype=float32)}
Exercise: Write the optimization routine
Finally, write the optimization routine!
Make it run for 3,000 iterations, and record the loss on each iteration. Don't forget to update your parameters! (How you do so will depend on how you've set up the parameters.)
# Write your optimization routine below. # And if you implemented your optimization loop, # feel free to comment out the next two lines from dl_workshop.answers import model_optimization_loop losses, theta = model_optimization_loop(theta, linear_model, mseloss, x, y, n_steps=3000)
Now, let's plot the loss score over time. It should be going downwards.
import matplotlib.pyplot as plt plt.plot(losses) plt.xlabel('iteration') plt.ylabel('mse');
Inspect your parameters to see if they've become close to the true values!
print(theta)
{'w': DeviceArray(2.003443, dtype=float32), 'b': DeviceArray(19.98238, dtype=float32)}
Summary
Ingredients of Linear Model
From these first three sections, have seen how the following components play inside a linear model:
- Model specification ("equations", e.g. y = wx + b) and the parameters of the model to be optimized (w and b, or more generally, \theta).
- Loss function: tells us how wrong our model parameters are w.r.t. the data (MSE)
- Optimization routine (for-loop)
Let's now explore a few pictorial representations of the model.
Linear Regression In Pictures
Linear regression can be expressed pictorially, not just in equation form. Here are two ways of visualizing linear regression.
Matrix Form
Linear regression in one dimension looks like this:
Linear regression in higher dimensions looks like this:
This is also known in the statistical world as "multiple linear regression". The general idea, though, should be pretty easy to catch. You can do linear regression that projects any arbitrary number of input dimensions to any arbitrary number of output dimensions.
Neural Diagram
We can draw a "neural diagram" based on the matrix view, with the implicit "identity" function included in orange.
The neural diagram is one that we commonly see in the introductions to deep learning. As you can see here, linear regression, when visualized this way, can be conceptually thought of as the baseline model for understanding deep learning.
The neural diagram also expresses the "compute graph" that transforms input variables to output variables. | https://ericmjl.github.io/dl-workshop/01-differential-programming/03-linear-model-optimization.html | CC-MAIN-2022-33 | refinedweb | 1,171 | 57.06 |
Taskbar Extensions
- Unified Launching and Switching
- Jump Lists
  - Destinations
  - Tasks
  - Customizing Jump Lists
- Thumbnail Toolbars
- Icon Overlays
- Progress Bars
- Deskbands
- Notification Area
- Thumbnails
- Related topics
Unified Launching and Switching
As of the Windows 7 taskbar, Quick Launch is no longer a separate toolbar. The launcher shortcuts that Quick Launch typically contained are now pinned to the taskbar itself, mingled with buttons for currently running applications. When a user starts an application from a pinned launcher shortcut, the icon transforms into the application's taskbar button for as long as the application is running. When the user closes the application, the button reverts to the icon. However, both the launcher shortcut and the button for the running application are just different forms of the Windows 7 taskbar button.
While the application is running, its taskbar button becomes the single place to access all of the following features, each discussed in detail below.
- Tasks: common application commands, present even when the application is not running.
- Destinations: recently and frequently accessed files specific to the application.
- Thumbnails: window switching, including switch targets for individual tabs and documents.
- Thumbnail Toolbars: basic application control from the thumbnail itself.
- Progress Bars and Icon Overlays: status notifications.
The taskbar button can represent a launcher, a single application window, or a group. An identifier known as an Application User Model ID (AppUserModelID) is assigned to each group. An AppUserModelID can be specified to override standard taskbar grouping, which allows windows to become members of the same group when they might not otherwise be seen as such. Each member of a group is given a separate preview in the thumbnail flyout that is shown when the mouse hovers over the group's taskbar button. Note that grouping itself remains optional.
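The grouping identifier described above can also be set programmatically. The following is a minimal, non-authoritative sketch that assigns an explicit AppUserModelID to the whole process early in startup, so that all of its windows share one taskbar button; the ID string is a made-up placeholder.

```cpp
// Minimal sketch: assign an explicit AppUserModelID to every window of
// this process so the taskbar groups them under one button.
// Windows 7+; link against shell32.lib.
#include <windows.h>
#include <shobjidl.h> // SetCurrentProcessExplicitAppUserModelID

int APIENTRY wWinMain(HINSTANCE, HINSTANCE, PWSTR, int)
{
    // Must be called early, before any UI is created, so that every
    // window the process creates inherits this group identifier.
    HRESULT hr = SetCurrentProcessExplicitAppUserModelID(
        L"Contoso.SampleApp.Main.1"); // hypothetical ID string
    if (FAILED(hr))
        return 1;

    // ... create windows and run the message loop as usual ...
    return 0;
}
```

Per convention, the string takes the form CompanyName.ProductName.SubProduct.VersionInformation and identifies the group to the taskbar.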
As of Windows 7, taskbar buttons can now be rearranged by the user through drag-and-drop operations.
Note: The Quick Launch folder (FOLDERID_QuickLaunch) is still available for backward compatibility, although there is no longer a Quick Launch UI. However, new applications should not ask to add an icon to Quick Launch during installation.
For more information, see Application User Model IDs (AppUserModelIDs).
Jump Lists
A user typically launches a program with the intention of accessing a document or performing tasks within the program. The user of a game program might want to get to a saved game or launch as a specific character rather than restart a game from the beginning. To get users more efficiently to their final goal, a list of destinations and common tasks associated with an application is attached to that application's taskbar button (as well as to the equivalent Start menu entry). This is the application's Jump List. The Jump List is available whether the taskbar button is in a launcher state (the application isn't running) or whether it represents one or more windows. Right-clicking the taskbar button shows the application's Jump List, as shown in the following illustration.
By default, a standard Jump List contains two categories: recent items and pinned items. However, because only categories with content are shown in the UI, neither category appears on first launch. Always present are an application launch icon (to launch more instances of the application), an option to pin or unpin the application to or from the taskbar, and a Close command for any open windows.
Destinations
The Recent and Frequent categories are considered to contain destinations. A destination, usually a file, document, or URL, is something that can be edited, browsed, viewed, and so on. Think of a destination as a thing rather than an action. Typically, a destination is an item in the Shell namespace, represented by an IShellItem or IShellLink. These portions of the destination list are analogous to the Start menu's recently used documents list (no longer shown by default) and frequently used application list, but they are specific to an application and therefore more accurate and useful to the user. The results used in the destination list are calculated through calls to SHAddToRecentDocs. Note that when the user opens a file from Windows Explorer or uses the common file dialog to open, save, or create a file, SHAddToRecentDocs is called for you automatically, which results in many applications getting their recent items shown in the destination list without any action on their part.
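As a minimal sketch of the call mentioned above, an application that opens documents through its own code path (rather than the common file dialog) might report each opened file itself. The helper name and example path below are hypothetical.

```cpp
// Minimal sketch: tell the Shell about a just-opened document so it can
// surface the file in this application's Recent/Frequent categories.
// Windows only; link against shell32.lib.
#include <windows.h>
#include <shlobj.h> // SHAddToRecentDocs, SHARD_PATHW

void ReportDocumentOpened(PCWSTR filePath)
{
    // SHARD_PATHW: the second argument is a null-terminated wide-char path.
    // The Shell records the usage for the calling application, which feeds
    // the destination list shown on its taskbar button.
    SHAddToRecentDocs(SHARD_PATHW, filePath);
}

// Hypothetical call site, e.g. right after the application opens a file:
// ReportDocumentOpened(L"C:\\Users\\Public\\Documents\\report.txt");
```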
Launching a destination is much like launching an item using the Open With command. The application launches with that destination loaded and ready to use. Items in the destination list can also be dragged from the list to a drop destination such as an email message. By having these items centralized in a destination list, it gets users where they want to go that much faster, which is the goal.
As items appear in a destination list's Recent category (or the Frequent category or a custom category as discussed in a later section), a user might want to ensure that the item is always in the list for quick access. To accomplish this, he or she can pin that item to the list, which adds the item to the Pinned category. When a user is actively working with a destination, he or she wants it easily at hand and so would pin it to the application's destination list. After the user's work there is done, he or she simply unpins the item. This user control keeps the list uncluttered and relevant.
A destination list can be regarded as an application-specific version of the Start menu. A destination list is not a shortcut menu. Each item in a destination list can be right-clicked for its own shortcut menu.
APIs
- IApplicationDestinations::RemoveDestination
- IApplicationDestinations::RemoveAllDestinations
- IApplicationDocumentLists::GetList
- SHAddToRecentDocs
Tasks
Another built-in portion of a Jump List is the Tasks category. While a destination is a thing, a task is an action, and in this case it is an application-specific action. Put another way, a destination is a noun and a task is a verb. Typically, tasks are IShellLink items with command-line arguments that indicate particular functionality that can be triggered by an application. Again, the idea is to centralize as much information related to an application as is practical..
APIs
Customizing Jump Lists
An application can define its own categories and add them in addition to or in place of the standard Recent and Frequent categories in a Jump List. The application can control its own destinations in those custom categories based on the application's architecture and intended use. The following screen shot shows a Custom Jump List with a History category.
If an application decides to provide a custom category, that application assumes responsibility for populating it. The category contents should still be user-specific and based on user history, actions, or both, but through a custom category an application can determine what it wants to track and what it wants to ignore, perhaps based on an application option. For example, an audio program might elect to include only recently played albums and ignore recently played individual tracks.
If a user has removed an item from the list, which is always a user option, the application must honor that. The application must also ensure that items in the list are valid or that they fail gracefully if they have been deleted. Individual items or the entire contents of the list can be programmatically removed.
The maximum number of items in a destination list is determined by the system based on various factors such as display resolution and font size. If there isn't space enough for all items in all categories, they are truncated from the bottom up.
APIs
Thumbnail Toolbars
To provide access to a particular window's key commands without making the user restore or activate the application's window, an active toolbar control can be embedded in that window's thumbnail preview. For example, Windows Media Player might offer standard media transport controls such as play, pause, mute, and stop. The UI displays this toolbar directly below the thumbnail as shown in the following illustration—it does not cover any part of it.
This toolbar is simply.
Because there is limited room to display thumbnails and a variable number of thumbnails to display, applications are not guaranteed a given toolbar size. If space is restricted, buttons in the toolbar are truncated from right to left. Therefore, when you design your toolbar, you should prioritize the commands associated with your buttons and ensure that the most important come first and are least likely to be dropped because of space issues.
Note When an application displays a window, its taskbar button is created by the system. When the button is in place, the taskbar sends a TaskbarButtonCreated message to the window. Its value is computed by calling RegisterWindowMessage(L("TaskbarButtonCreated")). That message must be received by your application before it calls any ITaskbarList3 method.
API
- ITaskbarList3::ThumbBarAddButtons
- ITaskbarList3::ThumbBarSetImageList
- ITaskbarList3::ThumbBarUpdateButtons
- THUMBBUTTON
Icon Overlays. To display an overlay icon, the taskbar must be in the default large icon mode, as shown in the following screen shot.
.
Because a single overlay is overlaid on the taskbar button and not on the individual window thumbnails, this is a per-group feature rather than per-window. Requests for overlay icons can be received from individual windows in a taskbar group, but they do not queue. The last overlay received is the overlay shown.
APIs
Progress Bars
A taskbar button can be used to display a progress bar. This enables a window to provide progress information to the user without that user having to switch to the window itself. The user can stay productive in another application while seeing at a glance the progress of one or more operations occurring in other windows. It is intended that a progress bar in a taskbar button reflects a more detailed progress indicator in the window itself. This feature can be used to track file copies, downloads, installations, media burning, or any operation that's going to take a period of time. This feature is not intended for use with normally peripheral actions such as the loading of a webpage or the printing of a document. That type of progress should continue to be shown in a window's status bar.
The taskbar button.
AP.
APIs
Notification Area. When a notification balloon is displayed, the icon becomes temporarily visible, but even then a user can choose to silence them. An icon overlay on a taskbar button therefore becomes an attractive choice when you want your application to communicate that information to your users.
Thumbnails.
Note As in Windows Vista, Aero must be active to view thumbnails.
API
- ITaskbarList3::RegisterTab
- ITaskbarList3::SetTabActive
- ITaskbarList3::SetTabOrder
- ITaskbarList3::UnregisterTab
- ITaskbarList4::SetTabProperties
Thumbnail representations for windows are normally automatic, but in cases where the result isn't optimal, the thumbnail can be explicitly specified. By default, only top-level windows have a thumbnail automatically generated for them, and the thumbnails for child windows appear as a generic representation. This can result in a less than ideal (and even confusing) experience for the end user. A specific switch target thumbnail for each child window, for instance, provides a much better user experience.
API
- DwmSetWindowAttribute
- DwmSetIconicThumbnail
- DwmSetIconicLivePreviewBitmap
- DwmInvalidateIconicBitmaps
- WM_DWMSENDICONICTHUMBNAIL
- WM_DWMSENDICONICLIVEPREVIEWBITMAP
You can select a particular area of the window to use as the thumbnail. This can be useful when an application knows that its documents or tabs will appear similar when viewed at thumbnail size. The application can then choose to show just the part of its client area that the user can use to distinguish between thumbnails. However, hovering over any thumbnail brings up a view of the full window behind it so the user can quickly glance through them as well.
If there are more thumbnails than can be displayed, the preview reverts to the legacy thumbnail or a standard icon.
API
To add Pin to Taskbar to an item's shortcut menu, which is normally required only for file types that include the IsShortCut entry, is done by registering the appropriate context menu handler. This also applies to Pin to Start Menu. See Registering Shell Extension Handlers for more information.
Related topics | http://msdn.microsoft.com/en-us/library/dd378460(v=vs.85).aspx | CC-MAIN-2014-15 | refinedweb | 2,030 | 51.89 |
Test::CheckChanges - Check that the Changes file matches the distribution.
Version 0.14
use Test::CheckChanges; ok_changes();
You can make the test optional with
use Test::More; eval { require Test::CheckChanges }; if ($@) { plan skip_all => 'Test::CheckChanges required for testing the Changes file'; } ok_changes();
This module checks that you Changes file has an entry for the current version of the Module being tested.
The version information for the distribution being tested is taken out of the Build data, or if that is not found, out of the Makefile.
It then attempts to open, in order, a file with the name Changes or CHANGES.
The Changes file is then parsed for version numbers. If one and only one of the version numbers matches the test passes. Otherwise the test fails.
A message with the current version is printed if the test passes, otherwise dialog messages are printed to help explain the failure.
The examples directory contains examples of the different formats of Changes files that are recognized.
All functions listed below are exported to the calling namespace.
The ok_changes method takes no arguments and returns no value.
Currently this package parses 4 different types of
Changes files. The first is the common, free style,
Changes file where the version is first item on an unindented line:
0.01 Fri May 2 15:56:25 EDT 2008 - more info
The second type of file parsed is the Module::Changes::YAML format changes file.
The third type of file parsed has the version number proceeded by an * (asterisk).
Revision history for Perl extension Foo::Bar * 1.00 Is this a bug or a feature
The fourth type of file parsed starts the line with the word Version followed by the version number.
Version 6.00 17.02.2008 + Oops. Fixed version number. '5.10' is less than '5.9'. I thought CPAN would handle this but apparently not..
There are examples of these Changes file in the examples directory.
Create an RT if you need a different format file supported. If it is not horrid, I will add it.
The Debian style
Changes file will likely be the first new format added.
Please open an RT if you find a bug.
"G. Allen Morris III" <gam3@gam3.net>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~gam/Test-CheckChanges-0.14/lib/Test/CheckChanges.pm | CC-MAIN-2016-26 | refinedweb | 394 | 74.39 |
X then implement the web service and now we talk to that web service. The beauty of the web service is that we have no idea what’s happening at the server of the customer, someone might even be typing Xml documents and sends them to us. WSDL is one of the few standards that really works.
Creating a WSDL file manually is a tedious job. There are actually people that think creating a WSDL file manually is faster than letting Axis or XFire create it for you. So when I was asked to deliver a WSDL file I created an XFire project, wrote a java interface and some config files and the job was done.
The next step was creating a unit test for the web service, that’s what this article is about.
When you’re new to XFire it’s a good idea to have a look at the the quick start guide. This guide is pretty good, so I decided not to write a getting started guide on this blog. After you read the quick start guide everything else you’ll read here should be clear. Also take a look at this article at JavaWorld, that article helped me a lot.
The test web service
For this article I created a ping/pong web service. You invoke the ping method and will receive pong. My project is called xfire-test, I have an interface (nl.amis.ITestService) and a class (nl.amis.TestService)
package nl.amis;
public interface ITestService {
public String ping();
}
package nl.amis;
public class TestService implements ITestService {
public String ping() {
return "pong";
}
}
My services.xml contains one entry:
<beans xmlns="">
<service>
<name>test</name>
<namespace>amis</namespace>
<serviceClass>nl.amis.ITestService</serviceClass>
<implementationClass>nl.amis.TestService</implementationClass>
</service>
</beans>
Run the webservice server in Eclipse
I use Eclipse for my unit testing and run a Tomcat servce inside Eclipse. First shut down Eclipse and include the following dependency in Maven2:
<dependency>
<groupId>org.codehaus.xfire</groupId>
<artifactId>xfire-all</artifactId>
<version>1.0</version>
</dependency>
You also can include the XFire libraries manually of course.
To enable ‘web things’ on your project execute the following maven command:
mvn eclipse:eclipse –Dwtpversion=1.0
When you reload your Eclipse you’ll see a little earth icon on your project. You now have a dynamic web project, congratulations!
Right click on you project and choose ‘Run As’, ‘Run on server’. You might have to change some settings and include you favorite web container. In the end you should see a window that looks like this:
When you followed the quick start guide a WSDL should be available at (test this in your web browser, a chaotic xml file will appear)
First unit test
Now your web service is running in your browser and it’s time to unit test it
Create a test with the following code:
Client client = new Client(new URL(""));
Object[] results = client.invoke("ping", null);
assertEquals("pong", results[0]);
Client is a org.codehaus.xfire.client.Client object (when you have to choose between different Client objects)
Second unit test
This test method works, but if you use a little more complex code you can test a lot more and talking to an interface is better than including the method you want to invoke as a string.
The second test:
Service serviceModel = new ObjectServiceFactory().create(ITestService.class);
XFire xfire = XFireFactory.newInstance().getXFire();
XFireProxyFactory factory = new XFireProxyFactory(xfire);
String serviceUrl = "";
ITestService client = null;
client = (ITestService) factory.create(serviceModel, serviceUrl);
assertEquals("pong", client.ping());
You can invoke a method on the client object and XFire makes sure the SOAP call is executed. When your client returns a proprietary object you’ll even receive that object (given that you have they class for the object locally)
When your webservice throws an exception you can even catch that. But there is a little catch. When an exception is thrown a XFire exception is thrown with your own exception wrapped into it.
Conclusion
Unit testing a SOAP webservice is quite easy. But this method can also be used to create a client or only a WDSL file. The only drawback is that you still need the java Interface and
the webservice’s proprietary objec
ts. The nice thing of a webservice is that you don’t know whether you’re talking to Java, .NET, Ruby, Python or a Killer Coding Ninja Monkey
XFire is integrated in the Spring framework and when you follow the Codehaus Spring Remoting manual you won’t even notice you’re using a webservice, it just looks like a local method call.
You can only invoke a document/literal web service using XFire. I guess you have to use Axis or JDeveloper created stub for accessing a RPC/encoded web service.
Client client = new Client(new URL(“”));
Object[] results = client.invoke(“ping”, null);
assertEquals(“pong”, results[0]);
——————————————————————————–
how about the return type is complex and using Dynamic Client ?
I tried,it returned the Document object of apache.
What should I do if I want to call a webservice existing under jdk1.4 ?
thanks a lot
I haven’t checked it out. The deep integration isn’t necessarily a requirement, usually you just want simple access to a web service.
Thanks for the tip, I will check JAX-WS out, maybe it’s an even better library than XFire.
Hi! Have you checked out the JAX-WS RI (from GlassFish)? Spring runs on GlassFish but, do you want a deeper integration? If so, could you drop me an email to understand better what is missing?
In any case, you may want to check out this the comparison matrix at the Apache Wiki:
– eduard/o
Assume client.ping() in the example returns a complex object called ComplexObject. You have to have the class for your object locally and then it’s just
ComplexObject result=client.ping() instead of String result=client.ping();
how do i do unit testing if the return type is Complex and not simple? can we do the same thing in axis..m just curious | http://technology.amis.nl/2006/10/08/unit-testing-with-xfire-how-to-test-your-soap-server-with-a-wsdl-file/ | CC-MAIN-2014-42 | refinedweb | 1,018 | 65.01 |
- Type:
Suggestion
- Status: Open
- Priority:
P3: Somewhat important
- Resolution: Unresolved
- Affects Version/s: 4.8.2, 5.0.0 Beta 1, 5.8.0
- Fix Version/s: None
- Component/s: Core: Other
- Labels:
Currently qglobal.h disables some MSVC warnings if QT_NO_WARNINGS is defined:
#if !defined(QT_CC_WARNINGS) # define QT_NO_WARNINGS #endif #if defined(QT_NO_WARNINGS) # if defined(Q_CC_MSVC) # pragma warning(disable: 4251) /* class 'A' needs to have dll interface for to be used by clients of class 'B'. */ # pragma warning(disable: 4244) /* 'conversion' conversion from 'type1' to 'type2', possible loss of data */ # pragma warning(disable: 4275) /* non - DLL-interface classkey 'identifier' used as base for DLL-interface classkey 'identifier' */ # pragma warning(disable: 4514) /* unreferenced inline/local function has been removed */ ...
Defining QT_CC_WARNINGS is no real option, because i get tons of warnings for Qt code, if QT_CC_WARNINGS is defined and i am then unable to find the relevant places in my own code.
Is there a work-around for this?
If not then the Qt source code should be fixed such that it is not necessary to disable those warnings, because disabling those warnings in a vital header file like qglobal.h (which is included in many many other header files of Qt) does effect the code of the library user, which is not acceptable in security and safety critical applications (like those that we do develop). | https://bugreports.qt.io/browse/QTBUG-26877 | CC-MAIN-2020-16 | refinedweb | 225 | 50.87 |
How to Work With Someone Else's Code
The simplest rule to follow when working with someone else's code is the Boy Scout's motto: leave it better than we found it.
It's an inevitable part of being a software engineer: We get stuck with making a change or adding a feature to code we did not create, are not familiar with, and is unrelated to our part of the system. Although this can be a tedious and difficult task, there is a great deal to be gained from being flexible enough to work with code written by another developer, including increasing our sphere of influence, fixing software rot, and learning about previously unknown parts of the system (not to mention, learning techniques and tricks from another programmer).
Taking into account both the tediousness and the advantages of working with code written by another developer, there are some serious pitfalls that we must watch out for:
- Our ego: We may think we know best, but we often do not. We are about to change code that we have very little knowledge about, including the intent of the original author, the decisions over the years that led to this code, and the tools and frameworks available to the original author when he or she wrote it. Humility is worth its weight in gold and it should be liberally applied.
- The ego of the original author: We are about to touch code written by another developer, another human with his or her own style, constraints, deadlines, and personal lives (that consume his or her time apart from work). It is only human nature for that person to become defensive when we start questioning the decisions he or she made or questioning why the code looks so unclean. We should make every effort to get the original author to work with us, rather than hinder us.
- Fear of the unknown: Many times, we are going to be touching code that we know very little or entirely nothing about. That can be scary: We will be responsible for any of the changes we make, but we are essentially walking around a dark house with no flashlight. Instead of being fearful, we need to establish a structure that allows us to be comfortable with making changes, both large and small and allows us to ensure that we have not broken existing functionality.
Being that developers, ourselves included, are humans, there is a lot that human nature plays into working with code written by another developer. In this article, we will walk through five techniques that we can use to ensure that we use our understanding of human nature to our advantage, gaining as much as we can from the existing code and the original author, and leaving the code in a better state than we found it in. Although this list is not comprehensive by any means, applying the techniques below will ensure that when we are finished change code written by another developer, we are confident that we have maintained the working state of the existing functionality, while at the same time, ensured that our new feature is working in harmony with the existing code base.
1. Ensure Tests Exist
The only truly confident way that we can ensure that the existing functionality in code written by another developer actually works as intended and that any changes we make to it do not keep it from working as intended, is to support the code with tests. When we come across code written by another developer, there are two states we can find it in: (1) without sufficient level of testing or (2) with a sufficient level of testing. In the former case, we are on the hook for creating those tests and in the latter case, we can use the existing tests to both ensure any changes we make do not break the code, as well as learning as much as we can about the intent of the code from the tests.
Creating New Tests
This is not an envious case: we are responsible for the changes we make to another developer's code, but we do not have a way to know for sure that we did not break something while we were making changes. Complaining will not help. Regardless of the condition we found the code in, by nature of touching the code, we will be responsible if it breaks. Therefore, we should take ownership of our actions as we make our changes. The only way to know that we do not break something is to write the tests ourselves.
Although this is tedious, it has the major advantage of allowing us to learn through writing the tests. Assuming that the code works right now, we need to write our tests so that expected inputs result in expected outputs. As we go through this test-writing process, we will start to learn about the intent and functionality of the code. For example, given the following code:

```java
class SuccessfulFilter implements Predicate<Person> {
    @Override
    public boolean test(Person person) {
        return person.getAge() < 30 && ((((person.getSalary() - (250 * 12)) - 1500) * 0.94) > 60000);
    }
}
```
we do not know much about the intent or the magic numbers used in the code, but we can create a set of tests where known inputs produce known outputs. For example, by doing some simple math and solving for the threshold salary that constitutes success, we find that if a person is under 30 and makes approximately $68,330 a year, they are considered successful (by the standards of this code). Although we do not know what those magic numbers mean, we do know that they reduce the original salary. Thus, the $68,330 threshold is a base salary before deductions. Using this information, we can create a few simple tests, such as the following:
```java
public class SuccessfulFilterTest {

    private static final double THRESHOLD_NET_SALARY = 68330.0;

    @Test
    public void under30AndNettingThresholdEnsureSuccessful() {
        Person person = new Person(29, THRESHOLD_NET_SALARY);
        Assert.assertTrue(new SuccessfulFilter().test(person));
    }

    @Test
    public void exactly30AndNettingThresholdEnsureUnsuccessful() {
        Person person = new Person(30, THRESHOLD_NET_SALARY);
        Assert.assertFalse(new SuccessfulFilter().test(person));
    }

    @Test
    public void under30AndNettingLessThanThresholdEnsureUnsuccessful() {
        Person person = new Person(29, THRESHOLD_NET_SALARY - 1);
        Assert.assertFalse(new SuccessfulFilter().test(person));
    }
}
```
With just these three tests, we now have a general understanding of how the existing code works: if a person is under 30 and they make $68,330 per year, they are considered successful. While we could create many more tests to ensure corner cases (such as null ages or salaries) function correctly, a few short tests have given us not only an understanding of the original functionality, but also a suite of automated tests that can be used to ensure that we do not break existing functionality when we make changes to the existing code.
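The "simple math" mentioned above can itself be sketched in code. Assuming the magic numbers are exactly those in the filter (a $250 monthly deduction, a $1,500 flat amount, a 0.94 factor, and a $60,000 net target), inverting the filter's arithmetic yields the threshold:

```java
public class ThresholdSolver {
    public static void main(String[] args) {
        // Invert ((salary - 250*12 - 1500) * 0.94) > 60000 to solve for salary
        double fixedDeductions = (250 * 12) + 1500;          // 4500 in fixed deductions
        double threshold = (60000 / 0.94) + fixedDeductions; // smallest "successful" salary
        System.out.printf("Threshold salary: %.2f%n", threshold); // prints 68329.79
    }
}
```

This is where the $68,330 figure used in the tests comes from: the computed threshold of roughly $68,329.79, rounded up to the nearest dollar.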
Using Existing Tests
In the event that there exists a sufficient test suite for the code, there is still a great deal we can learn from the tests. Just as with the tests we created, by reading the tests, we gain an understanding of how the code is intended to work at a functional level. What's more, we gain an understanding of how the original author intended the code to function. Even if the tests were written by someone other than the original author (before we came along), this still provides us with someone else's understanding of the code's intent.
While existing tests can be helpful, they should still be taken with a grain of salt. It is difficult to tell if the tests have kept up with the development changes to the code. If they have, we have a strong basis for understanding the code; if they have not, we have to be very careful not to be misled. For example, if the original salary threshold were $75,000 per year, and was later changed to our $68,330 value, this outdated test could lead us astray:
```java
@Test
public void under30AndNettingThresholdEnsureSuccessful() {
    Person person = new Person(29, 75000.0);
    Assert.assertTrue(new SuccessfulFilter().test(person));
}
```
This test would still pass, but it would not pass for the intended reason. Instead of passing because it is exactly the threshold value, it is passing because it is over the threshold value. If this test suite included a test case that expected the filter to return
false if the salary is one dollar less than the threshold, this second test would fail, revealing that the threshold value is wrong. If the suite did not have such a test, it would be easy for stale data to mislead us about the actual intent of the code. When in doubt, trust the code: As we previously showed, solving for the threshold reveals that the test does not target the actual threshold.
Additionally, consult the repository logs (i.e. Git logs) for both the code and the test cases: if the last updates to the code are much more recent than the last updates to the tests (and significant changes have been made to the code, such as changing the threshold value), then it is likely the tests have fallen out of date and should be viewed with caution. Note that they should not be disregarded entirely, as they still might provide us with some documentation about the intent of the original author (or the developer who wrote the tests more recently), but they may contain stale or incorrect data.
2. Talk to the Person Who Wrote It
Communication is critical in any endeavor involving more than one person. Whether it is a company, a cross-country trip, or a software project, a lack of communication is one of the most reliable means of crippling a task. Although we should be communicating whenever we create new code, the stakes rise when we are touching existing code. In this case, we do not know much about the existing code, and what we do know may be misguided or only represent a fraction of the whole story. In order to truly understand the existing code, we need to talk to the person who wrote it.
When it comes time to ask questions, we need to be sure they are specific and are targeted at achieving our goal of understanding the code. For example:
- Where does this piece fit into the big picture of the system?
- Do you have any designs or diagrams for it?
- Any pitfalls I should be aware of?
- What does this or that component or class do?
- Is there anything you wanted to put into this code that you didn't at the time? Why?
Always be humble and seek out genuine answers from the original author. Almost every developer has had an instance where he or she looked at someone else's code and asked themselves, "Why on earth did he/she do that? Why didn't they just do this?" only to spend hours to come to the same conclusion as the original author. Most developers are talented programmers, so it is a good idea to assume that if we come across a seemingly poor decision, there is probably a good reason for having made it (there may not be, but it is better to go into someone else's code assuming there is a good reason; if there is not, we can make that change through refactoring).
Communication also has a secondary side-effect in software development. Conway's Law, originally created by Melvin Conway in 1967, states that:
Any organization that designs a system...will inevitably produce a design whose structure is a copy of the organization's communication structure.
That means that a large, tightly communicating team will likely produce monolithic, tightly-coupled code, while a set of smaller teams will likely produce more independent, loosely-coupled code (for more information on this correlation, see Demystifying Conway's Law). What this means for us is that our communication structure not only affects our particular piece of code but the entire code-base. Therefore, as much as it is a good idea to be in tight communication with the original author, we should check that we are not too tightly dependent on the original author. Not only will this likely annoy the original author, it may also produce unintended coupling in our code.
While this may be helpful to delve into our code, we are assuming that the original author can be reached. In many cases, the original author may have left the company or is just unreachable (i.e. on vacation). What do we do in that case? Ask someone who may have an idea about the code. This does not have to be someone who actually worked on the code, but it could be someone who was around when it was written or knows the person who wrote it. Even getting an idea from those around the author can shed some light on an otherwise unknown piece of code.
3. Remove All Warnings
There is a well-known concept in psychology called the Broken Window Theory that is poignantly presented by Andrew Hunt and Dave Thomas in The Pragmatic Programmer (p. 4-6). This theory, originally developed by James Q. Wilson and George L. Kelling, states that visible signs of disorder, such as a single broken window left unrepaired, invite further disorder and neglect.
In other words, it is human nature to disregard care for an item or thing if it appears to be uncared for already. For example, a person is much more likely to vandalize a building if it already appears disheveled. In terms of software, this means that it is human nature to make a mess of code if a developer finds that the code is already a mess. Essentially, we say to ourselves (albeit in not so many words), "If the last person didn't care about it, why should I?" or "My mess will be hidden underneath the rest of this mess."
That is no longer an excuse for us, though. The buck stops with us. Once we touch this code that previously belonged to someone else, we will be responsible for it and we will have to answer for it if it stops working. In order to make sure that we defeat this natural human tendency, we need to take small steps to make our code less disheveled (replace the broken windows).
One simple way of doing this is to remove all warnings from the entire package or module we are working with. In the case of unused or commented-out code, remove it. If we need this code later, we can always retrieve it from a previous commit in our repository. If there are warnings that cannot be solved directly (such as a raw-types warning), annotate the call or method with the
@SuppressWarnings annotation. This ensures that we are deliberate about our code: they are not warnings by negligence, but rather, we are explicitly noting the warnings (such as raw-types).
Once we have removed or explicitly suppressed all warnings, we must ensure that the code stays warning-free. This has two major implications:
- It forces us to be deliberate with any code we create.
- It reduces the change of code rot, where warnings now result in errors later.
This also has the psychological side-effect of showing others, as well as ourselves, that we actually care about the code we are dealing with. No longer is this a collection space, where we hold our nose, make a change, commit, and never look back. Instead, we are deliberate in our responsibility for this code. This also aids in future development, showing future developers that this is not a warehouse with broken windows: it is a code-base that is well maintained.
4. Refactor
Refactoring has become a very overloaded term in the last few decades and has recently become synonymous with any change to currently working code. Although refactoring does involve changes to code that is currently working, that is not the entire picture. Martin Fowler, in his aptly named seminal book on the topic, Refactoring, defines refactoring as:
A change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior.
The key to this definition is that it involves a change that does not alter the observable behavior of a system. This means that when we refactor code, we must have a means of ensuring the externally visible behavior of the code does not change. In our case, this means in the test suite that we inherited or developed ourselves. In order for us to ensure that we have not changed the external behavior of our system, each time we make a change, we have to recompile and execute the entirety of our tests.
Additionally, not every change we make is considered a refactoring. For example, renaming a method to better reflect its intended use is a refactoring but including a new feature is not. In order to see the benefit of refactoring, we will refactor the
SuccessfulFilter. The first refactoring we will perform is Extract Method to better encapsulate the logic for the net salary for a person:
public class SuccessfulFilter implements Predicate<Person> { @Override public boolean test(Person person) { return person.getAge() < 30 && getNetSalary(person) > 60000; } private double getNetSalary(Person person) { return (((person.getSalary() - (250 * 12)) - 1500) * 0.94); } }
After we make this change, we recompile and run our test suite, which continues to pass. Already, it has become easier to see that success is defined by the age and the net salary of a person, but the
getNetSalary method does not seem to belong to the
SuccessfulFilter as much as it does to the
Person class (the tell-tale sign is that the only argument to the method is a
Person and the only call the method makes is to a method of the
Person class, thus there is a strong affinity to the
Person class). In order to better situate this method, we perform a Move Method to move it to the
Person class: ((getSalary() - (250 * 12)) - 1500) * 0.94; } } public class SuccessfulFilter implements Predicate<Person> { @Override public boolean test(Person person) { return person.getAge() < 30 && person.getNetSalary() > 60000; } }
In order to clean this code up further, we perform multiple Replace Magic Number with Symbolic Constant actions for each of the magic numbers. In order to find the meaning for each of these values, we may have to talk to the original author, or someone with enough domain knowledge to lead us in the correct direction. We will also perform more Extract Method refactorings to ensure our existing methods are as simple as possible.
public class Person { private static final int MONTHLY_BONUS = 250; private static final int YEARLY_BONUS = MONTHLY_BONUS * 12; private static final int YEARLY_BENEFITS_DEDUCTIONS = 1500; private static final double YEARLY_401K_CONTRIBUTION_PERCENT = 0.06; private static final double YEARLY_401K_CONTRIBUTION_MUTLIPLIER = 1 - YEARLY_401K_CONTRIBUTION_PERCENT; getPostDeductionSalary(); } private double getPostDeductionSalary() { return getPostBenefitsSalary() * YEARLY_401K_CONTRIBUTION_MUTLIPLIER; } private double getPostBenefitsSalary() { return getSalary() - YEARLY_BONUS - YEARLY_BENEFITS_DEDUCTIONS; } } public class SuccessfulFilter implements Predicate<Person> { private static final int THRESHOLD_AGE = 30; private static final double THRESHOLD_SALARY = 60000.0; @Override public boolean test(Person person) { return person.getAge() < THRESHOLD_AGE && person.getNetSalary() > THRESHOLD_SALARY; } }
We recompile and test, and find that our system still works as intended: we have not changed the external behavior, but we have improved the reliability and internal structure of the code. For a more involved list of refactorings and the refactoring process, see Martin Fowler's Refactoring and the excellent Refactoring Guru website.
5. Leave it Better Than You Found It
The final technique is strikingly simple in concept and difficult in practice: leave the code better than you found it. As we comb through code, especially someone else's code, we have a tendency to add our feature, test it, and move on, paying no mind to the software rot we contributed or to the additional confusion our new methods may have added to a class. Therefore, the entirety of this article can be summed up in the following rule:
Whenever we make a change to code, ensure that we leave it in a better condition than when we found it.
As previously stated, we are now responsible for the damage to that class or code we have altered, and if it does not work, we will be responsible for fixing it. In order to combat the entropy that accompanies production software, we have to force ourselves to leave any code we touch better than we found it. Instead of shying away from the problem, we have to pay up on the technical debt, ensuring that the next person who touches this code will not have to pay the price, with interest. Who knows, it may be us in the future that will be thanking ourselves for that stitch in time. }} | https://dzone.com/articles/adding-functionality-to-legacy-code | CC-MAIN-2018-17 | refinedweb | 3,500 | 56.18 |
. * * @(#)schedule.c 8.1 (Berkeley) 5/31/93 * $FreeBSD: src/games/trek/schedule.c,v 1.4 1999/11/30 03:49:53 billf Exp $ * $DragonFly: src/games/trek/schedule.c,v 1.3 2006/09/07 21:19:44 pavalos Exp $ */ # include "trek.h" /* ** SCHEDULE AN EVENT ** ** An event of type 'type' is scheduled for time NOW + 'offset' ** into the first available slot. 'x', 'y', and 'z' are ** considered the attributes for this event. ** ** The address of the slot is returned. */ struct event * schedule(int type, double offset, char x, char y, char z) { struct event *e; int i; double date; date = Now.date + offset; for (i = 0; i < MAXEVENTS; i++) { e = &Event[i]; if (e->evcode) continue; /* got a slot */ # ifdef xTRACE if (Trace) printf("schedule: type %d @ %.2f slot %d parm %d %d %d\n", type, date, i, x, y, z); # endif e->evcode = type; e->date = date; e->x = x; e->y = y; e->systemname = z; Now.eventptr[type] = e; return (e); } syserr("Cannot schedule event %d parm %d %d %d", type, x, y, z); /* NOTREACHED */ return(NULL); } /* ** RESCHEDULE AN EVENT ** ** The event pointed to by 'e' is rescheduled to the current ** time plus 'offset'. */ void reschedule(struct event *e1, double offset) { double date; struct event *e; e = e1; date = Now.date + offset; e->date = date; # ifdef xTRACE if (Trace) printf("reschedule: type %d parm %d %d %d @ %.2f\n", e->evcode, e->x, e->y, e->systemname, date); # endif return; } /* ** UNSCHEDULE AN EVENT ** ** The event at slot 'e' is deleted. */ void unschedule(struct event *e1) { struct event *e; e = e1; # ifdef xTRACE if (Trace) printf("unschedule: type %d @ %.2f parm %d %d %d\n", e->evcode, e->date, e->x, e->y, e->systemname); # endif Now.eventptr[e->evcode & E_EVENT] = 0; e->date = 1e50; e->evcode = 0; return; } /* ** Abreviated schedule routine ** ** Parameters are the event index and a factor for the time ** figure. 
*/ struct event * xsched(int ev1, int factor, int x, int y, int z) { int ev; ev = ev1; return (schedule(ev, -Param.eventdly[ev] * Param.time * log(franf()) / factor, x, y, z)); } /* ** Simplified reschedule routine ** ** Parameters are the event index, the initial date, and the ** division factor. Look at the code to see what really happens. */ void xresched(struct event *e1, int ev1, int factor) { int ev; struct event *e; ev = ev1; e = e1; reschedule(e, -Param.eventdly[ev] * Param.time * log(franf()) / factor); } | http://www.dragonflybsd.org/cvsweb/src/games/trek/schedule.c?rev=1.3 | CC-MAIN-2014-42 | refinedweb | 401 | 72.56 |
How Many AWS Services Are There?
A definitive answer… for the moment.
Pending January 2021 update: We are now just before re:Invent and the bottom line CloudPegboard.com count is already past 305. In the next few weeks it will surge ahead again. Come back in January for the post AWS re:Invent 2020 update.
January 2020 update: This post has been updated with sidebars to reflect final counts as of the end of 2019. Across multiple measures, AWS service growth was 13% since this was originally posted in May 2019.
I love tools! I’m not exactly sure why, but there is something magical about how a clever invention can magnify your abilities. I enjoy tools for both their form and function. I get weak-kneed looking at an antique brass sextant with its shiny mirrors and lenses and that rich golden brass color and reflectivity. And among that beauty is an incredible navigational instrument that allows precise navigation at sea during the day or night, arguably resulting in fundamental changes in the world by enabling precise open sea navigation.
If one tool is good, then more is better, right? Yes, and… there are a lot of problems to solve and good tools help us solve more problems better, faster, and more efficiently. However, sometimes the sheer volume of available choices has a cost. At times, too many choices can be overwhelming or can lead to using the wrong (or a suboptimal) tool for the job. How many times have you used a pliers to loosen a stubborn hex nut and rounded the edges when you should have selected the proper-sized socket or crescent wrench?
This brings us to AWS re:Invent. Each year I get on a manic high scribbling notes during both the Andy Jasse and Werner Vogels keynotes. I love tools, and for my attendance I’m rewarded with this piñata-like flood of new announcements to feed my nerdy tool cravings and allow me to develop solutions better and faster than I ever could before. For me, this is exceptionally invigorating.
And then comes the crash from my sugar high. Now I need to learn and understand dozens of new tools and features. Sure, continuing learning is part of my daily life and responsibility, but like you, I already feel like I don’t have time to keep up with and excel at my current responsibilities, let alone take the time to become expert at a host of new tools. I could look away, but I know they will help me, and that ignoring these new tools means I’m missing opportunities and falling behind. It takes some effort to learn how to use a sextant, but those who didn’t were surely left behind or shipwrecked.
I think about the benefits and challenges of tool choice a lot. In fact, I founded Cloud Pegboard to help manage this complexity. One of my first questions was how many services does AWS actually offer and what is the growth rate? As it turns out, this was not easily answered until we built our database of AWS services. Now that we have this curated database, I thought I’d answer the question for future travelers. Of course, the answer is not static, but since we track AWS changes daily, I’ll post updates when the counts materially change (and they are always available in realtime on this dashboard page).
January 2020 update: The statistics dashboard page has a variety of statistics including interesting charts showing the exponential growth of AWS announcements (6305 to date) and blogs (11,446 to date). There’s also some data on AWS re:Invent 2019 sessions (3797) that I find fascinating.
What do we count?
Before we answer the question “how many services does AWS offer?” we first need to decide what we are counting. It turns out, the count is relative to your context and what you want to know, and there are several dimensions that we can use for our counting. I’ll show a set of four dimensions (there are more) below so you can pick which ever matches your interests.
Namespaces
January 2020 update: AWS has stopped publishing the namespaces table. The last tally in 2019 was 182. Going forward, we will monitor IAM prefixes instead since they are very closely related. As of January 2020, there are 219 unique IAM prefixes. Note that some services use multiple IAM prefixes.
AWS defines namespaces for its services. Namespace values are primarily used for IAM permissions and for ARN resource identifiers. Namespaces are a technical view that’s a pretty good proxy for what we think of as a service (I’ll define a service as in an API-accessible collection of related functions). Counting the number of service entries in the namespaces table reveals there are 149 AWS services.
Using the count of namespaces as a proxy for growth, we can see the exponential growth of AWS services and why I’m at once invigorated and panicked at the growth of service capabilities (see my previous article on Cloud Architecture Principles to Enable Team Empowerment for some perspectives on why leveraging the newer services is a strategic advantage). In 2013, there were just 25 namespace entries. Between 2018 (95 services) and 2019 (149 services) there was a 57% growth in just one year. This is exponential growth. Will it continue?
“Ok, thanks for the answer Ken.” Hold on just a sec… we get a different answer if we look at the number of “Products.”
Products
January 2020 update: The current count of products is 191. That’s a 13% increase since the original post in May 2019.
AWS provides a top-level listing of all of its products. This is a less technical view and is more representative of the products or solutions AWS is making available to its customers. The count of products is 169 AWS products (there are 176 entries, but 7 are listed in two categories). The difference in counts between products and namespaces is that a namespace value is not required for some products and others just haven’t had their namespaces documented yet.
“169 shall be the number thou shalt count, and the number of the counting shall be 169.”
Well, maybe, but what if we look at the number of services that we can select via the console?
Console
January 2020 update: The number of unique console URLs is now 159. This is also a 13% increase since the original post in May 2019.
Arguably, the number of things you can select from the console constitutes the number of services. Counting console links, we’re down to 140 unique console links. This value may vary slightly depending on whether you are enabled for preview on certain services or depending on the region selected. (For completeness, the N. Virginia count is 141, but AWS WAF and AWS Shield share a console.) This is a useful number if you want to know how many unique consoles there are, but it is surely an undercount since some services don’t provide console access and others are wrapped under a single console that covers multiple services.
“I know what you’re thinking. Did he fire six shots or only five? Well, to tell you the truth, in all this excitement, I’ve kinda lost track myself.” — Clint Eastwood — Dirty Harry
The Union Set
January 2020 update: The current Cloud Pegboard catalog count is 283 services and tools. Again, this is a 13% increase since the original post.
At present, Cloud Pegboard collects data from dozens of different data sources and builds a database that is the curated union of all services and tools. As of today, the total count of all unique entries is 250 AWS services and tools. Of these, roughly 10% are tools such as SDKs, the AWS CLI and so on. There are only a small number of special cases such as AWS DeepRacer and AWS DeepLens that don’t fit typical service or tool categorizations. One of the contributing factors to the higher count is that some AWS services are managed as families and actually comprise several discrete services. For example, in the AWS product listing there is a single AWS Snow Family product, but the Cloud Pegboard database expands this to Snowball, Snowball Edge, and Snowmobile. Each of these are indeed separate and differentiated products with unique attributes and capabilities. Therefore, the bottom-line all-encompassing count is 250 AWS services and tools.
Comparison to Microsoft Azure and Google Cloud Platform
January 2020 update: The current count of Azure services is 261 (9% growth since May) and the current count for GCP 115 (17% growth since May).
Knowing the count for AWS of course begs the question about comparing to other cloud providers. While we are experts in the nitty gritty details of the catalog of all AWS services and their attributes, so far, we have far less depth for Azure and GCP. However, it turns out their data is much easier to count. Therefore, to help establish a point-in-time published record to aid in future comparisons and analyses, I’ll report our rough count. The current Azure listing contains 240 items and the current GCP listing shows 98 services. These numbers cannot be directly compared to the AWS numbers above any more than the count of AWS console entries can be compared to the count of AWS product entries. Nonetheless, the numbers are here as raw data for you to use as you see fit. In the future we will map apples to oranges to facilitate services comparisons based on functionality since comparing counts will never be truly meaningful.
Conclusion
Having just the right tool to optimally solve a problem can save a great deal of time, reduce downstream maintenance, enable greater agility, and ultimately deliver more value faster and cheaper. Since there is a vast variety of common problem elements across millions of solutions, it stands to reason that increasing the number of tools available will increase the number of opportunities to match a problem with a better tool and reap the benefits. The tradeoff is that to gain the value of reuse across the exponentially growing number of AWS services, we need to be able to find, learn, and understand our palette of options so we can bring them to bear when and where needed.
AWS offers a large variety of services spanning many different categories to match the diversity of technical needs in modern cloud-centric development. The number of services available varies depending on which dimension is used as the basis. As of May 2019, here is the tally:
Services by namespace counting: 149
Services by product listing counting: 169
Services with dedicated console URLs: 140
Bottom line union count of all services and tools: 250
January 2020 update:
Services by namespace counting: 182
Unique IAM prefixes: 219
Services by product listing counting: 191
Services with dedicated console URLs: 159
Bottom line union count of all services and tools: 283
“Right! One… two… five!” — King Arthur in Monty Python and the Holy Grail
About Cloud Pegboard
Cloud Pegboard helps AWS practitioners be highly effective and efficient even in the face of the tremendous complexity and rate of change of AWS services. We want to keep AWS fun and feel more like dancing in a sprinkler on a hot summer day and less like being blasted with a firehose. We make the AWS information you need amazingly easy to access, and with personalization features, we help you stay up to date on the AWS services and capabilities that are important to you and your specific projects. | https://medium.com/cloudpegboard/how-many-aws-services-are-there-51dda44fa946?source=friends_link&sk=d0fbe5835c4d103dbce4c2d255edf929 | CC-MAIN-2022-33 | refinedweb | 1,952 | 69.72 |
You can use threads in Scala, however, there is a better way. You can use threads the same way in Scala, too, but there are alternative approaches. It is yet one other important paradigm shift.
We have seen an example of message-driven concurrency using actors and the Akka framework.
Here is a contrived example, which sums up a list of numbers. The code forks a thread to compute the summation, simulates a delay, and returns the answer to the main thread:
--------ListSumTask.java-------- import java.util.List; import java.util.concurrent.TimeUnit; public class ListSumTask implements Runnable { private final List<Integer> list; private volatile int acc; // 1 public ListSumTask(List<Integer> list) { super(); this.list = list; this.acc ...
No credit card required | https://www.oreilly.com/library/view/scala-functional-programming/9781783985845/ch11s04.html | CC-MAIN-2019-43 | refinedweb | 123 | 61.22 |
Monitoring Amazon FSx for Lustre
With Amazon FSx for Lustre, you can monitor activity for your file systems using Amazon CloudWatch metrics.
Monitoring with Amazon CloudWatch
You can monitor file systems using Amazon CloudWatch, which collects and processes raw data from Amazon FSx for Lustre into readable, near real-time metrics. These statistics are retained for a period of 15 months, so that you can access historical information and gain a better perspective on how your web application or service is performing. By default, Amazon FSx for Lustre metric data is automatically sent to CloudWatch at 1-minute periods. For more information about CloudWatch, see What Is Amazon CloudWatch? in the Amazon CloudWatch User Guide.
As with Amazon EFS, Amazon S3, and Amazon EBS, Amazon FSx for Lustre CloudWatch metrics are reported as raw Bytes. Bytes are not rounded to either a decimal or binary multiple of the unit.
Amazon FSx for Lustre publishes the following metrics into the
AWS/FSx namespace in
CloudWatch. For each metric, Amazon FSx for Lustre emits a data point per disk per
minute. To view aggregate file
system details, you can use the
Sum statistic. Note that the file servers behind
your Amazon FSx for Lustre file systems are spread across multiple disks.
Amazon FSx for Lustre Dimensions
Amazon FSx for Lustre metrics use the
FSx namespace and provide metrics for a single
dimension,
FileSystemId. A file system's ID can be found using the
aws fsx describe-file-systems AWS CLI command, and it takes the form of
fs-01234567890123456. | https://docs.aws.amazon.com/fsx/latest/LustreGuide/monitoring_overview.html | CC-MAIN-2020-50 | refinedweb | 257 | 53.31 |
BusterJs with RequireJs/Backbone
BusterJs is a still-in-beta library that allows for testing your Javascript. It's got a wealth of cool features. The browser capturing is awesome for running your Javascript directly in the browsers you choose from one runner. You can also execute within Node. In short, it rocks. But, how to get this rockin' with your project, specifically your AMD RequireJs with BackboneJs combo project is the lock that must be opened before daily buster love can be had.
Install
Buster is easily installed everywhere (but apparently not in Windows, which I have not tried):
> sudo npm install -g buster
The buster docs indicate not to use sudo, but I'm reckless.
Buster Config
My directory structure looks something like:
proj/ src/ static/ js/ # here are the objects under test test/ tests/ # here are the tests buster.js # here is the buster config
My previous experience with setting up Jasmine testing with RequireJs was not entirely straightforward. BusterJs was not totally straightforward either, but it felt better. For one, it already has a runner. I just need to give it some config (
buster.js):
var config = module.exports; config['browser-all'] = { autoRun: false, environment: 'browser', rootPath: '../', libs: [ 'src/static/js/vendor/require-jquery-2.0.2.js', 'src/static/js/vendor/underscore-1.3.3.js', 'src/static/js/vendor/backbone-0.9.2.js' ], sources: [ 'src/static/js/**/*.js', 'src/static/js/**/*.handlebars' ], tests: ['test/tests/*.js'], extensions: [require('buster-amd')] };
A few salient points related to RequireJs / Backbone:
autoRun- Turning this off allows you to run buster tests manually. This is important from an AMD perspective, because the objects under test are loaded asynchronously. Only once they're loaded do we want to kick off the tests.
libs- Include the RequireJs, Underscore, and Backbone files here.
libswill put some script tags into the browser, so require will be ready once tests start executing. They're loaded first and in order (Underscore before Backbone is important).
sources- I was having problems with my handlebars template loader plugin until I realized that I need to list all sources, including templates, under this attribute. And don't forget '**' for subfolders.
extentions- buster-amd is a buster extension that helps with the AMD module loading. This will also require a
npm install buster-amd. As the buster-amd docs point out, you still need to list your sources and tests normally so they're available to the buster runner, so don't leave these out thinking they'll be magically available.
The other configuration options/details are well documented.
BusterJs Test Example
There are a few simple examples of other busterjs tests that test AMD modules. Mine looks something like:
buster.spec.expose(); require.config({ baseUrl: 'src/static/js/', paths: { text: './vendor/text-2.0.0', /* ... */ } });
describe('single backbone dependency', function(run) { require(['Widget'], function(widget) { run(function() { it('should load', function() { expect(true).toEqual(true); // nothing but test execution }); }); }); });
More from the peanut gallery:
buster.spec.expose()just pushes main buster functions into the wide-open namespace to be called willy nilly. Reckless -- again. :)
require.config- it saddens me, but I have had to include this within each test file. Others have commented that they could include this once in the buster.config
libs, but it didn't work for me. I also tried 'testHelpers', without the help they advertise. Please let me know if it does for you and what kind of pixie dust is required.
baseUrlneeds to jive with your buster rootPath so that your RequireJs relative paths will match up and work in your app runtime and in the test runtime.
run- notice this is called within the require callback manually.
BusterJs Runner
If you call within the next 15 minutes, the travel-size test runner is included. Operators are standing by. Start your test server:
> buster server
That will start a server at localhost:1111. Head 1+ of your local browsers to that address and capture them as your imprisoned slaves. They will do your bidding when you run the tests. Go to your project directory and run:
> buster test
If you've tied it all together, you should see something like:
> buster test Chrome 21.0.1180.49, OS X 10.7 (Lion): ..... 1 test cases, 1 tests, 1 assertions, 0 failures, 0 errors, 0 timeouts Finished in 0.02s
And now for a few parting tips...
Mismatched Define Module
If you happen to include a js file in your 'libs' attribute or another section that's loaded previous to your tests running that includes a
define() block, you're going to get stuck with this wonder:
Uncaught exception: ./src/static/js/vendor/require-jquery-2.0.2.js:1803 Uncaught Error: Mismatched anonymous define() module: function (module) {
As the require docs point out, to avoid this:
Be sure to load all scripts that call define() via the RequireJS API.
RequireJs 2.0 shim
I wasn't able to get the shim setup for getting underscore/backbone loaded and in the correct order. Instead, I just listed these non-AMD files in the correct order under the 'libs' attribute in buster.config. | https://jaketrent.com/post/busterjs-requirejsbackbone/ | CC-MAIN-2022-40 | refinedweb | 861 | 65.73 |
UPDATE #1: Starter Application Now Available!
Check out a video of the device in action, or scroll to the final step, where I have it embedded!
Step 1: What You'll Need
At the very least, you will need:
- A Google Home
- Any model Raspberry Pi
- GPIO Cables
- A 5V two-channel relay module
And the rest is software. If you are totally new to Raspberry Pi, be aware that you may need some additional hardware like USB cables or WiFi chips in order to get up and running.
Step 2: On-Board Software Setup
So, to make this guide as user-friendly as possible, I'm going to include some links that you power-users might find excessive.
TLDR in advance: set up your Raspberry Pi on WiFi or Ethernet (preferably WiFi) and configure your router so that you have a server available externally. You'll use raspberry-gpio-python to control the relay.

For newer hobbyists, you will start out by setting up your Raspberry Pi.

You will want to get your Raspberry Pi set up on your local WiFi.

I'll be working in Node.js, so you will want to upgrade to the latest version of Node.

Configure the router so that port 80 forwards to your Raspberry Pi's local IP address; giving the Pi a DHCP reservation tied to its MAC address in your router's settings keeps that address stable. (Sorry, this will depend on what router you're using, and there isn't really a universal guide)

I prefer using SSH to connect to my Raspberry Pi.
Plenty of things can go wrong in this process while you're starting out. Stay patient, and google things. The community is very supportive, and the odds are someone else has had your problem before!
Step 3: Make a Circuit
So, there are lots of guides on getting started with relays on the Rasberry Pi. I mostly used Youtube tutorials like this one to get started.
Basically, you will need to provide power from your Raspberry Pi's 5v out pin, and choose which control pins you want to use to send the on/off signal to trigger the relay.
Using the above image, I recommend using the yellow pins for whichever model you use.
Step 4: Create Your Server
Starter application now available!
Vist to download a starter application for this project, and follow the README to get it configured and running on your own device.
You can also check out my more-fleshed-out React project at if you are interested in seeing a slightly more complex version of the project
The main step is to build an Node + Express server that is able to handle POST requests.
In my code, it looks like this:
app.post('/api/switches/:id', function(req, res){ var foundSwitch = getSwitch(req.params.id); foundSwitch.toggle(); saveState(); console.log("postSwitch "+JSON.stringify(foundSwitch)); res.json(foundSwitch); })
I make a post request to /api/switches/:id, where id is written as sw1, sw2, and so on. After identifying the switch, I call a toggle() method to run my Python script and change the state of my relay.
I wrote individual python scripts for off and on functions, specifying which GPIO pin was tied to each switch. for example, sw1_on.py looks like:
import RPi.GPIO as GPIO<br>GPIO.setwarnings(False) GPIO.setmode(GPIO.BCM) GPIO.setup(23, GPIO.OUT)
Then, by requiring the Python-shell node module, I can execute the script, using:
<p>const PythonShell = require('python-shell');</p><p>PythonShell.run('./public/python/scripts/sw1_on.py')</p>
Looking back, this is a little bit tricky for a non-developer. I probably will need to throw in some starter code down the road.
Step 5: Connecting to the Google Home
If you've managed to get this far, this information is probably the only reason that you're here. That's fine! this is the cool bit.
You have your server running, and it can control a relay. It is structured so that a POST request can change the state of the relay. Now all you need is to get your Google Home to deliver a POST request to your device. Eventually, you will want to add some authorization so that strangers can't control your devices, but for now we just want the request to work.
- Go to IFTTT and connect it to your Google account.
- Create a new applet and click on the +this link.
- Select Google Assistant
- Choose "Say a simple phrase" as your trigger
- Tell Google what should trigger the action.
- I prefer to use the name of the device I want to control, so I said "turn my lamp on"
- Designate a response
- "Turning your lamp on"
- Click "Create Trigger" and proceed
- Click the +that link
- Choose the Gray icon (not WeMo Maker)
- Select "Make a web request"
Now, here is the important bit. Identify your IP address (or domain, if you set up that level of abstraction), and enter it into the URL portion. If you followed the structure in my starter project, the path will end in /api/switches/sw1 (or whichever switch id you want to control).
Set Method to POST
Content Type should be text/plain
Body can be left blank
Create your action and choose Finish.
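It can help to see the request IFTTT will send spelled out before you test it against your server. This sketch builds the same POST as a Node `http.request` options object — the host, port, and switch id are placeholders for your own setup:

```javascript
// Sketch of the request IFTTT delivers, expressed as Node
// http.request options. Host, port, and switch id are placeholders.
function buildToggleRequest(host, port, id) {
  return {
    hostname: host,
    port: port,
    path: `/api/switches/${id}`,
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
  };
}

const opts = buildToggleRequest('192.0.2.10', 3000, 'sw1');
console.log(opts.method, opts.path); // POST /api/switches/sw1
// To fire it for real: require('http').request(opts).end();
```

If a request shaped like this toggles your relay from another machine on your network, the IFTTT applet will work the same way.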
Step 6: Congrats!
You've done it! Your Google Home now knows how to communicate over HTTP with your smart device.
Since this does a toggle, you can technically keep saying "Turn the lamp on" to turn it on and off. I preferred to add duplicate on and off commands for each of my switches to make everything feel more comfortable.
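The duplicate-command idea boils down to a tiny pure function: explicit "on" and "off" commands are idempotent (saying them twice is harmless), while "toggle" depends on the current state. A hypothetical sketch — the starter project only exposes toggle:

```javascript
// Hypothetical command handler: "on"/"off" always produce the same
// result regardless of current state; "toggle" flips it.
function applyCommand(current, cmd) {
  if (cmd === 'on') return true;
  if (cmd === 'off') return false;
  if (cmd === 'toggle') return !current;
  throw new Error(`unknown command: ${cmd}`);
}

console.log(applyCommand(false, 'on'));    // true
console.log(applyCommand(true, 'on'));     // true (idempotent)
console.log(applyCommand(true, 'toggle')); // false
```

Wiring separate /on and /off routes to this kind of handler is what makes "turn the lamp on" behave predictably even when you've lost track of the lamp's current state.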
If you enjoyed this project, please share! There's a LinkedIn post here that can really help me out.
I am a web developer struggling to find work in the SF Bay Area, so I could use some visibility. If you are looking to hire a JavaScript developer, or know someone who is, you can get in touch with me at kylpeacock@gmail.com.
If you would like to contribute to this guide, or to work with me on building out a starter application, you can also feel free to get in touch! I want to make this process as easy as possible for new hackers. | http://www.instructables.com/id/Google-Home-Raspberry-Pi-Power-Strip/ | CC-MAIN-2017-22 | refinedweb | 1,018 | 71.85 |