Cheers for this. However I have an issue installing this plugin via the BApp Store using the latest jython 2.7b3.
Error: [snip]cryptoAttack.py”, line 17, in
from java import awt;
ImportError: cannot import name awt
[snip]
Thanks for letting me know. That is strange since I am running the same version of jython and don’t get the error :/
I am also getting the same error
I had the same error above, but I fixed it by checking my version of jython.
I run Arch Linux and there are 2 versions of jython available to me;
dave /opt $ yaourt jython
1 community/jython 2.5.3-2 [installed]
An implementation of the Python language written in Java
2 aur/jython27 2.7b3-1 (8)
An implementation of the Python language written in Java
I had the latest version (jython27) initially installed, so I replaced it with jython 2.5.3-2 and the extension now loads just fine.
People encountering the error – I think this was fixed with this commit. I am not sure the BApp Store is running the latest, but I pinged PortSwigger. Anyway, try getting the latest from GitHub and see if the problem repros?
Also Sprach Zarathustra is also the title of composer Richard Strauss's classical piece used in 2001: A Space Odyssey. You know, the one with the brass and drums...
Thus Spake Zarathustra Part I Metanode
A BOOK FOR ALL AND NONE
This is the first division in the work by Friedrich Nietzsche, and seems a reasonable
place to collect them in this manner. For the record, this is an etext from Project
Gutenberg, and so is in the public domain. The translation, by Thomas Common, is also
in the public domain.
The first discourse, The Three Metamorphoses, I did not node, amoebius did. This
put me off the larger project for some time. However, when no more discourses were
forthcoming from that source, or any other, I resolved to carry out this project. I am
including the first discourse as a convenience to readers.
I read Thus Spake Zarathustra in high school, and again at university. I think it was this very edition, with all its archaic and quaint affectations--though presumably good usage for its time. Ever since discovering Everything, I have wanted to node this book.
There are four parts all told--and Zarathustra did tell. I will continue to node them as I
am able.
I have already noded the first thought of Zarathustra, and, because of its
appropriateness, included this link at the end of each discourse.
As Walter Kaufmann explains in his preface to Thus Spoke Zarathustra, Friedrich Nietzsche was a very ill man when he wrote his most popular work. Plagued by poor eyesight, an unsettled stomach, and insomnia helped only by severe medication, he relentlessly strove to be a happy man. Zarathustra, the proponent of his new philosophy, is a prophet of overcoming. He preaches of the overman, a new kind of being, a being who will overcome – overcome nausea. The overman will overcome not the physical nausea which afflicted the writer, but his psychological nausea.
Zarathustra's nausea is intimately connected with the idea of the eternal return (discussed in greater detail below) – but Nietzsche's idea of a circular existence so permeates the book that it is impossible to tell when Zarathustra's journey actually begins. It is impossible to tell where the psychological nausea with life begins, and this is as it should be. Nausea with life is a part of life. Part of Nietzsche's total rejection of Christianity is his rejection of a point of creation. There is no fall from grace for Zarathustra, there is no exile from the garden – grace was never there, and no idyllic garden can shield us from the terror of existence. But one must begin somewhere, and the first time the reader is introduced to Zarathustra's nausea is in the second part of the book, when he must drink with the rabble:
Life is a well of joy; but where the rabble drinks too, all wells are poisoned. […] Are poisoned wells required, and stinking fires and soiled dreams and maggots in the bread of life? Not my hatred but my nausea gnawed hungrily at my life. Alas, I often grew weary of the spirit when I found that even the rabble had esprit. (Z II:6)
To rid himself of his nausea, Zarathustra must flee the cause, the stinking rabble. He must retreat into elitism, to metaphorically climb the mountain (again), to fight the spirit of gravity (itself a cause of nausea). The path to a clean well is hard going, but Zarathustra manages it:
How did I redeem myself from nausea? Who rejuvenated my sight? How did I fly to the height where no more rabble sits by the well? Was it my nausea itself which created wings for me and water diving powers? Verily, I had to fly to the highest spheres that I might find the fount of pleasure again. (Z II:6)
Later in Zarathustra's journey of overcoming he is again climbing the mountain, and he is besieged again by the spirit of gravity, preventing his approaching ever higher heights. It speaks to him thus:
Yesterday, toward evening, there spoke to me my stillest hour: […] “What do you matter, Zarathustra? […] It is the stillest words that bring on the storm. Thoughts that come on doves' feet guide the world. […] You must yet become as a child and without shame. The pride of youth is still upon you; you have become young late; but whoever would become as a child must overcome his youth too.” (Z II:22)
Part III of Thus Spoke Zarathustra begins with the wanderer hardening himself, and continuing up the mountain. He is beset by the spirit of gravity again, in the guise of a dwarf/mole, which represents his failure to become hard, his failure to be able to overcome himself. And as he is tested, his challenge again compels its own solution:
I was like one sick whom his wicked torture makes weary, and who as he falls asleep is awakened by a still more wicked dream. But there is something in me that I call courage; that has so far slain my every discouragement. This courage finally bade me stand still and speak: “Dwarf! It is you or I!” For courage is the best slayer, courage which attacks; for in every attack there is playing and brass. (Z III:2)
From this gateway, Moment, a long, eternal lane leads backward: behind us lies an eternity. Must not whatever can walk have walked on this lane before? Must not whatever can happen have happened, have been done, have passed by. (Z III:2.2)
P1. The world/universe is a finite object.
P2. In a finite space, the set of all possible events is finite.
P3. Time is infinite.
C. All possible events will happen an infinite number of times.
(The step from premises to conclusion is pigeonhole-style: with only finitely many possible configurations and infinitely much time, whatever occurs must recur, and recur without end.)
A young shepherd I saw, writhing, gagging, in spasms, his face distorted, and a heavy black snake hung out of his mouth. Had I ever seen so much nausea and pale dread on one face? He seemed to have been asleep when the snake crawled into his throat, and there bit itself fast. My hand tore at the snake and tore in vain; it did not tear the snake out of his throat. Then it cried out of me: “Bite! Bite its head off! Bite!” Thus it cried out of me—my dread, my hatred, my nausea, my pity, all that is good and wicked in me cried out of me with a single cry. (Z III:2.2)
“The great disgust with man—this choked me and had crawled into my throat […] 'Eternally recurs the man of whom you are weary, the small man' […] And the eternal recurrence even of the smallest—that was my disgust with all existence. Alas! Nausea! Nausea! Nausea!” (Z III:13)
Finally, it seems, the cycle breaks – as Zarathustra has resigned himself to the larger cycle's continuance. In Part IV, a soothsayer (probably a representation of the influence Schopenhauer had over Nietzsche) re-appears (having first visited Zarathustra in Part II) and forces him to recognize another danger:
“You proclaimer of ill tidings,” Zarathustra said finally, “this is a cry of distress and the cry of a man; it might well come out of a black sea. My final sin, which has been saved up for me—do you know what it is?”
“Pity!” answered the soothsayer from an overflowing heart, and he raised both hands. “O Zarathustra, I have come to seduce you to your final sin.” (Z IV: 2)
As soon as Zarathustra has achieved the overcoming of nausea that he wanted, he is faced with another challenge: the desire to share his knowledge with others. Pity, the final error, motivates him to come down from the mountaintop, to deliver his understanding – his new honey, to people. He must teach overcoming, as he has overcome his nausea. He begins his ministry anew, encountering various characters and preaching his doctrine of overcoming. Finally, at the end of Part IV, Zarathustra seems to overcome his pity:
Suddenly he jumped up. “Pity! Pity for the higher man!” he cried out, and his face changed to bronze. “Well then, that has had its time! My suffering and my pity for suffering—what does it matter? Am I concerned with happiness? I am concerned with my work.” (Z IV:20)
Minor bug in default template
Create a new site and open up the "Simple" template.
Line 33-35:
{$ if nonblank .headline $}
<p><em>{$ .about $}</em></p>
{$ endif $}
I suppose line 33 should be:
{$ if nonblank .about $}
Henrik Jernevad
Wednesday, August 27, 2003
It looks weird but it's actually on purpose. That's a little trick we used so that the Index article itself (which doesn't have a headline) can use the same template as all the articles do, without having any "about the author" section.
Joel Spolsky
Wednesday, August 27, 2003
Oh.. cool. =)
Although, it really should check that both headline and about are nonblank, shouldn't it? But that's not possible in CityScript right now, perhaps.
Henrik Jernevad
Wednesday, August 27, 2003
That's definitely possible...
{$ if nonblank .headline $}
{$ if nonblank .about $}
<p><em>{$ .about $}</em></p>
{$ endif $}
{$ endif $}
since its nested the html will only appear if both are nonblank
Michael H. Pryor
Wednesday, August 27, 2003
Ohh.. once again, cool.. =)
I thought nested conditionals weren't allowed. But perhaps it's only forEach:es that aren't possible to nest.
(Preparing to say "Oh, cool" once again, if I'm proven wrong one more time ;) )
forEach's can nest, too.
You just can't use the result of the outer forEach to decide which articles to include on the inner forEach, which makes this feature rather less useful.
I answered a question today where someone asked for an example of setting a low-level keyboard hook with C#. I actually have an example of doing so in my May 2006 MSDN Magazine article on Managed Debugging Assistants, but the example is purposefully buggy in order to demonstrate the behavior of certain MDAs.
Here is an example without the bug (compile this as a console application):
using System;
using System.Diagnostics;
using System.Windows.Forms;
using System.Runtime.InteropServices;
class InterceptKeys
{
private const int WH_KEYBOARD_LL = 13;
private const int WM_KEYDOWN = 0x0100;
private static LowLevelKeyboardProc _proc = HookCallback;
private static IntPtr _hookID = IntPtr.Zero;
public static void Main()
{
_hookID = SetHook(_proc);
Application.Run();
UnhookWindowsHookEx(_hookID);
}
private static IntPtr SetHook(LowLevelKeyboardProc proc)
{
using (Process curProcess = Process.GetCurrentProcess())
using (ProcessModule curModule = curProcess.MainModule)
{
return SetWindowsHookEx(WH_KEYBOARD_LL, proc,
GetModuleHandle(curModule.ModuleName), 0);
}
}
private delegate IntPtr LowLevelKeyboardProc(
int nCode, IntPtr wParam, IntPtr lParam);
private static IntPtr HookCallback(
int nCode, IntPtr wParam, IntPtr lParam)
{
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
int vkCode = Marshal.ReadInt32(lParam);
Console.WriteLine((Keys)vkCode);
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern IntPtr SetWindowsHookEx(int idHook,
LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool UnhookWindowsHookEx(IntPtr hhk);
[DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode,
IntPtr wParam, IntPtr lParam);
[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
private static extern IntPtr GetModuleHandle(string lpModuleName);
}
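(One note for anyone compiling this by hand: the sample uses the Windows Forms Keys enumeration, so the project needs a reference to System.Windows.Forms.dll — e.g. csc /r:System.Windows.Forms.dll InterceptKeys.cs, where the file name is just an example. Several comments below run into exactly that.)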
Hi Stephen,
Do you have anything similar for a low-level mouse hook?
Thanks.
Regards,
Soumitra
It turns out this is very simple to implement. Stephen shows a basic example
After my last post on implementing low-level keyboard hooks in C#, Soumitra asked if it was possible…
Soumitra, I didn’t, but the code for doing low-level mouse hooks is almost identical to that for low-level keyboard hooks, so I posted a mouse hooks version for you at. Hope it’s helpful.
Hey
Great piece of code but it doesn’t output anything if I press Alt or AltGr. For all other keys it works.
ALT is a system key and won’t be handled by the hook because of the filter for WM_KEYDOWN in HookCallback.
I added a check for WM_SYSKEYDOWN and got the alt and altgr keys
private const int WM_SYSKEYDOWN = 0x0104;
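In context, the modified check looks something like this (a sketch based on the constant above; WM_SYSKEYDOWN is posted for the ALT key itself and for keys pressed while ALT is held):
private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
{
    // Treat both ordinary and "system" key-downs the same way
    if (nCode >= 0 && (wParam == (IntPtr)WM_KEYDOWN || wParam == (IntPtr)WM_SYSKEYDOWN))
    {
        int vkCode = Marshal.ReadInt32(lParam);
        Console.WriteLine((Keys)vkCode);
    }
    return CallNextHookEx(_hookID, nCode, wParam, lParam);
}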
great stuff!
I’m wondering how would you check for a combination of CTRL+V or CTRL+ALT+k or similar?
thanx.
System.Windows.Forms.Control.ModifierKeys should do the trick, telling you whether shift, alt, and/or control are pressed.
yes that’s true 🙂
but how would i use them in
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
// this doesn’t work for me
int vkCode = Marshal.ReadInt32(lParam);
if ((Keys)vkCode == Keys.C && (Keys)vkCode == Keys.Control)
{
Console.WriteLine((Keys)vkCode);
}
}
lParam has only one value, no?
so vkCode is also just one value.
What am i missing here??
if (Keys.C == (Keys)vkCode && Keys.Control == Control.ModifierKeys)
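In the context of the sample's callback, a minimal sketch:
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
    int vkCode = Marshal.ReadInt32(lParam);
    // vkCode identifies the key that changed; modifier state comes from ModifierKeys
    if ((Keys)vkCode == Keys.C && Control.ModifierKeys == Keys.Control)
    {
        Console.WriteLine("Ctrl+C pressed");
    }
}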
thank you very much.
Thx, very good code
Thanks… I have a question. What would be the best way to take input from an attached device like a barcode reader and capture that information, but provide the application different data? Thanks in advance
Thanks, very good!
I have tried the code. But I only get this error;
You’re probably using C# 1.x, rather than 2.0. This code is C# 2.0, making use of delegate inference.
How do I have the app "eat" up certain key strokes and not pass it up the chain?
For eg: I’d like to disable the LWin & RWin keys for the duration this program is running.
I tried the following:
switch ((Keys)vkCode)
{
// set of keys that should be trapped and nullified
case Keys.LWin:
case Keys.RWin:
return CallNextHookEx((IntPtr)0, 0, wParam, (IntPtr)0);
default:
// pass it along to windows
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
Is there a way to do the same thing in C#.NET 1.1 (as below) or is it not possible?
—–;
Rok, if you want to eat the keystrokes, don’t call CallNextHookEx when you receive one of them.
Firemaple, sure, just change that to:
private static LowLevelKeyboardProc _proc = new LowLevelKeyboardProc(HookCallback);
C# 2.0 supports delegate inference (where the compiler knows that I wanted to create an instance of LowLevelKeyboardProc and so doesn’t make me type it out), a feature new since 1.1.
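And for eating keystrokes, a sketch of a callback that swallows the Windows keys — returning a nonzero value instead of calling CallNextHookEx, so the event never reaches other applications (see also the "return new IntPtr(1)" tip later in this thread):
private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
{
    if (nCode >= 0)
    {
        Keys key = (Keys)Marshal.ReadInt32(lParam);
        // Swallow both the down and up events so nothing reaches other apps
        if (key == Keys.LWin || key == Keys.RWin)
            return (IntPtr)1;
    }
    return CallNextHookEx(_hookID, nCode, wParam, lParam);
}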
Does anyone know how one would go about getting this to work as a Windows Service? When running as a service, the event does not seem to fire. I’m thinking this has something to do with the interactivity of services.
How can i run the KeyboardHook in the background and trap all keys in all programs?
I’m not clear on what you’re asking, as the code as it currently exist does intercept the keys for all processes.
i hav created a hook that will allow u to access all users on any network
to do this i just made a hook that allowed u to get all passwords and usernames.
Hi,
This article helped me a lot. Presently I am trying to build an MSAA application in C#, where I am using the IAccessible object. To get the IAccessible object pointer I need to call AccessibleObjectFromEvent and pass the VARIANT structure. I am stuck here. Can you resolve this problem?
Thanks,
Raghavendra.
Cool, works well.
If you have the time can you please advise how I can insert keystrokes into the stream. e.g. What I would like to be able to do is hook my "q" key and replace it with two "a" key keystrokes
Cheers
Beast
You can send keystrokes to any app using standard window mechanisms and messages, for example the SendInput function from Win32 or the SendKeys.Send method from Windows Forms.
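A sketch of Beast's q-to-aa replacement using SendKeys (two caveats: a low-level hook also sees the keystrokes it injects, so check the LLKHF_INJECTED flag — bit 0x10 of the flags field at offset 8 in the KBDLLHOOKSTRUCT that lParam points to — to avoid reacting to your own input; and calling SendKeys from inside the callback is fragile, so queuing the work, e.g. via a timer as suggested later in this thread, is safer):
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
    int vkCode = Marshal.ReadInt32(lParam);
    int flags = Marshal.ReadInt32(lParam, 8);   // KBDLLHOOKSTRUCT.flags
    bool injected = (flags & 0x10) != 0;        // LLKHF_INJECTED
    if (!injected && (Keys)vkCode == Keys.Q)
    {
        SendKeys.Send("aa");   // synthesize the replacement keystrokes
        return (IntPtr)1;      // eat the original 'q'
    }
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);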
I can’t compile, it gives me an error in the "using System.Windows.Forms;" directive. Is there supposed to be this namespace since this is a console program?
Thanks for the code
There’s no problem in using the Windows Forms DLL from a console app. If you’re getting a compilation error on that line, it’s almost certainly because you’re not referencing the Windows Forms DLL when you compile.
Ah, ok, I got it working now. Sorry for the lame question, still getting accustomed to C#.
Thanks for the quick reply.
Great article! I’m trying to make a quick & dirty keyboard filter to simplify adding documentation boiler plate to .ASM code. This way I hope to get my Assembler students to actually do it!
Whenever I use SendKeys.Send(string), the control keys currently in effect are applied to the string by the application.
For example, if Notepad is running, and my filter does something like
if (Keys.C == (Keys)vkCode && Keys.Control == Control.ModifierKeys)
SendKeys.Send( "hi there" );
Notepad pops up a Replace dialog box because the ‘h’ in "hi…" is interpreted as Ctrl+h.
Any easy way to turn off the control key for Notepad in this type of case?
Thanks a lot, G.Montante
Will it work in Vista?
I don't think so, because hooking has some issues due to the new security restrictions.
How should this all be done in Vista??
I don't know what the reason is, but I posted this same Vista issue on many forums and nobody replies, and even when somebody does reply there is no concrete answer. Everyone revolves around security policies or applying XP SP2 security, but that solves my problem in no way.
NewUser, yes, it should work in Vista, but not in situations where a lower privileged application tries to control a more privileged application; Vista does include security features to prevent such hijacking.
Gary, unfortunately I can't think of any good way to do that without running into a catch-22 situation. Nice idea, though, and best of luck finding a solution.
For .Net 1.x, change the Following:
private static LowLevelKeyboardProc _proc;// = HookCallback;
[…]
public static void Main() {
_proc += new LowLevelKeyboardProc(HookCallback);
_hookID = SetHook(_proc);
Application.Run();
UnhookWindowsHookEx(_hookID);
}
hi stephen, i made this code in a windows application, not in a console; after fixing the errors the program produces nothing. could you help me please with how to make it work in a windows application? thanx.
@Rok:
you can ‘return new IntPtr(1);’ for the key’s you want to block.
Rose, this should work fine in a Windows Forms application as long as you’re correctly pumping messages on the thread that installed the hook. Without knowing more about your code, though, I can’t help much.
Thanks for the code Stephen (and suggestions from others)! I needed to trap the function keys to setup a quick navigation within my application. One thing that I noticed is if you did not return quickly the keypress advances to the next hook and this was a problem as I wanted to display a dialog on F1 but the next application was getting the event. So I decided to record the key event, setup a timer for 100ms and process the key event when the timer is raised. This allows for immediate return of (1) which consumes the key press as desired. I hope this helps, and if anyone has a better method please let me know. Cheers Gary.
Hi Stephen,
Actually i am looking to develop a windows service with similar functionality.
For example, if i press key "s" it should save an image.
so from the service "OnStart" method i am calling something like this
myKeys.RegisterKeyBoardHook();where myKeys is the object of InterceptKeys class and written
public void RegisterKeyBoardHook()
{
<Check to see whether the code pressed is S>
{
_hookID = SetHook(_proc);
StoreImage();
}
} in the InterceptKeys class.
But when i press the key "s" storeImage is not getting called..
Any help on this..
Regards,
Ravikanth
Hi Stephan,
Though Its a very worthy article,
I am left over with a question.
private static IntPtr HookCallback(
int nCode, IntPtr wParam, IntPtr lParam)
{
if (nCode >= 0 && (wParam == (IntPtr)WM_KEYDOWN || wParam == (IntPtr)WM_SYSKEYDOWN))
{
int vkSCode = Marshal.ReadInt32(lParam);
if (((Keys)(Keys.S)) == (Keys)vkSCode && ((Keys)(Keys.Control | Keys.Alt) == Control.ModifierKeys))
{
StoreImage();
}
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
using the above code snippet i am able to trap CTRL+ALT+S
How can i implement CTRL+ALT+S+L?? Because there will be only one lParam…
Can you please reply me fast..
Regards,
Ravikanth
Watch for both WM_KEYDOWN and WM_KEYUP for both keys. You’ll need to look for receiving a WM_KEYDOWN for S and one for L before you receive a WM_KEYUP for whatever was received first. You won’t just get one event, because there are truly multiple keystrokes here.
Hi Stephan,
I written the code as below..
if (nCode >= 0 && (wParam == (IntPtr)WM_KEYDOWN || wParam == (IntPtr)WM_SYSKEYDOWN))
{
if (nCode >= 0 && (wParam == (IntPtr)WM_KEYDOWN || wParam == (IntPtr)WM_SYSKEYDOWN))
{
int vkCode = Marshal.ReadInt32(lParam);
if (((Keys)(Keys.S)) == (Keys)vkCode)
keySPressed = true;
if (((Keys)(Keys.L)) == (Keys)vkCode)
keyLPressed = true;
if (keySPressed && keyLPressed && ((Keys)(Keys.Control | Keys.Alt) == Control.ModifierKeys))
{
StoreImage();
}
}
else if (nCode >= 0 && (wParam == (IntPtr)WM_KEYUP || wParam == (IntPtr)WM_SYSKEYUP))
{
keySPressed = false;
keyLPressed = false;
}
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
But still i am getting lot of calls to "StoreImage".
Where i am going wrong ???
I don’t follow the logic you’re using. You could try something like the following, though I haven’t tested it. In general, I’m happy to try to help when I have time, but I don’t have time right now to debug random code.
private static bool keySPressed, keyLPressed;
private static bool ControlAltPressed
{
get
{
Keys mods = Keys.Control | Keys.Alt;
return (Control.ModifierKeys & mods) == mods;
}
}
private static IntPtr HookCallback(
int nCode, IntPtr wParam, IntPtr lParam)
{
Keys key = (Keys)Marshal.ReadInt32(lParam);
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
if (key == Keys.S && ControlAltPressed) keySPressed = true;
if (key == Keys.L && ControlAltPressed) keyLPressed = true;
if (keySPressed && keyLPressed) Console.WriteLine("Pressed");
}
else if (nCode >= 0 && wParam == (IntPtr)WM_KEYUP)
{
if (key == Keys.S) keySPressed = false;
if (key == Keys.L) keyLPressed = false;
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
Note, too, that the above sample won’t catch all corner cases. For example, keySPressed and keyLPressed will remain true even if the user lets go of the Control or Alt keys (while still continuing to hold S and L).
hi stephen
do you have anything similar for the PocketPC
regards
john
Hi Stephan,
This is a more general question regarding local hooks. Is it possible to create local hooks for other applications present on the desktop, assuming I know the thread id of this application?
Thanks
Phillip
Is there a way to just intercept for current process key press? The sample code catch all process key action, right?
Thanks,
joe
Joe: Sure, see. The example traps mouse events, but the code for keyboard events is similar. If you search the Web, you’ll find a bunch of additional examples.
Philip: You can either create a local hook (just the current process) or a global hook (every process). If you create a global hook, when you receive an event, you can do things like check what window is currently in the foreground to deduce what app will be processing the message, and filter based on that.
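A sketch of that filtering approach (GetForegroundWindow and GetWindowThreadProcessId are additional user32.dll imports, not part of the original sample; the process name is just an illustration):
[DllImport("user32.dll")]
private static extern IntPtr GetForegroundWindow();
[DllImport("user32.dll")]
private static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint processId);
private static bool ForegroundWindowBelongsTo(string processName)
{
    uint pid;
    GetWindowThreadProcessId(GetForegroundWindow(), out pid);
    using (Process p = Process.GetProcessById((int)pid))
    {
        // Compare against the target application's process name
        return string.Equals(p.ProcessName, processName, StringComparison.OrdinalIgnoreCase);
    }
}
Inside HookCallback you would then bail out early (calling CallNextHookEx as usual) unless ForegroundWindowBelongsTo("notepad"), say, returns true.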
Stephen…
So its either local on the current process only OR global on all processes.
I have an accounting application that I want to hook and subsequently trap keystrokes to add additional functionalty. I invoked the applications executable using the ‘CreateProcess’ API, obtained the thread id and defined my hook(s) based on that but the SetWindowsHookEx function failed!
Is this approach not possible based on your definition of what you can hook?
Thanks
Phillip.
From the documentation:
"The global hooks are a shared resource, and installing one affects all applications in the same desktop as the calling thread."
Additionally, most global hooks can’t be used from .NET… from support.microsoft.com/kb/318804:
"Except for the WH_KEYBOARD_LL low-level hook and the WH_MOUSE_LL low-level hook, you cannot implement global hooks in the Microsoft .NET Framework."
Thanks. Yes, I was aware of the limitations, costs (performance) and ‘danger’ of global hooks in .NET, which is why my first choice would have been a local hook.
I guess you are saying that I cannot define a hook using the thread id from another process – is that correct?
Thanks
Phillip.
The only global hooks that work with .NET (due to the injection issue mentioned previously) are WH_KEYBOARD_LL and WH_MOUSE_LL, and those can’t be associated with a particular thread in another app; if you try, you’ll get back Win32 error 0x0595: "This hook procedure can only be set globally".
Okay. I understand the limitations on the scope of global hooks, however that is really not the way I actually want to go anyway.
My question was really on local hooks and whether you could use the thread id from another process to define your hook.
I guess you are saying you cannot do this and you are suggesting global hooks as an alternative – but bearing in mind their limitations in .NET.
Thanks
Phillip.
Thanks for this sample. I needed something similar for a small project I’m doing (basically grab keyboard strokes, And pastes info into the clipboard based on which application was active when the keypress was made), And this does the trick. Too bad the .NET API is still missing many native calls needed. Hopefully, By Orcas it will get better.
Hey Stephen,
Thanks for the code, but I have one problem though.
I have been trying to use the code in a WINFX .NET 3.0 project and I have managed to get it to work by adding System.Windows.Input to the namespace and changing Keys to Key as in:
int vkCode = Marshal.ReadInt32(lParam);
MessageBox.Show(Convert.ToString((Key)vkCode));
It debugs the project fine but when it comes to pressing a key, rather than showing a D, it outputs a Y instead.
Any ideas?
Thanks
The System.Windows.Forms.Keys and System.Windows.Input.Key enumerations have different values (Keys.D == 0x44, Keys.Y == 0x59, Key.D == 0x2F, and Key.Y == 0x44). The former maps more directly to virtual key codes. If you're using WPF, you can use the public static method System.Windows.Input.KeyInterop.KeyFromVirtualKey to convert vkCode into a Key value.
-Stephen
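In code, that conversion is just (assuming a WPF project referencing System.Windows.Input):
int vkCode = Marshal.ReadInt32(lParam);
System.Windows.Input.Key key = System.Windows.Input.KeyInterop.KeyFromVirtualKey(vkCode);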
Ah right. Thanks for your help! 🙂
– Dan
Hey All,
Great article and useful converstations! I was able to hookup the keyboard to a local process (using the WH_KEYBOARD [=0x02] hook and passing the process id to the SetWindowsHookEx API) and also by using Philips timer idea, was able to cache and control the key strokes.
Thanks all for the help;
– Arif
What do you mean "this isn’t working"? How exactly is it not working? Are you getting a compiler error? Is it throwing an exception? etc.
Firstly… wonderful piece of code.
I am building a Keyboard Basher for my baby daughter, and as such want to detect and redirect several keystrokes (like LWin and RWin).
Using your sample as part of a Windows form, I have no problem detecting and consuming these keys, but what if I want to act on them (and still not pass them on to the system)? How do I workaround not being able to call a non-static method directly in this code?
(PS I am using .NET 1.1 and have adapted the code accordingly based on your previous comments)?
One more thing: I’m using Vista 64-bit.
Im doing a similar hook in Delphi. I understand that the style is different from c#. I have a basic hook, but it actually traps the keys. I would like them to be read, but still passed on to the application that has focus. How would i do that?
Varix, just make sure to call CallNextHookEx in all places where you want the keys passed along.
Will this hook work with 3d games that use Directx?
Great code. Thank you very much.
Hi Stephen great code…
I have a question. I see that Im not alone, because people asked here before, but no answer…
Does anyone know how one would go about getting this to work as a Windows Service? When running as a service, the event does not seem to fire. Can you help Stephen or anyone?
Hi all
Good stuff Stephen and contributers also.
I did have the same question as evrastil about running this as a windows service… any ideas or pointers?
Thank you,
For everyone asking about Windows services and hooks, see:
Hope that helps,
Stephen
How in .NET framework 2.0 do I convert the Integer keyCode or IntPtr lParam value to the actual value entered by the user, not one of the Keys enumerators?
Hi,
First of all, thanks to Stephen Toub for the code. Currently my PC is running an app that changes mode if I press the F12 key. If I run your program it shows all keys except the F12 key (probably that program is not passing it up to the next hook, i.e. eating it?). How can I make sure that your program traps F12 before that program does? Although my ultimate goal is not to trap the event, but rather to send/generate F12 keystrokes from my program, using code, to change the mode of that program (SendInput using user32.dll does not work, so I think it is using a low-level hook or something?).
Thanks
Hi,
I’m having the same problem Joao had back Jan. 16: I get the error in "using System.Windows.Forms." Apparently he got it to work, but I don’t understand the solution. How does one refrence the Windows Forms DLL when compiling?
I’m a real noob with C# and I’m using Visual C# 2005 Express.
Thanks,
Ed
Duh. I found the problem. I needed to add
System.Windows.Forms to the references in the Solution Explorer in Visual C# Express.
Thanks for the great code, Stephen!
Ed
Hi Stephen, Will this code work in the Windows Compact Framework? Im guessing using coredll.dll instead of user32.dll might work for some of those unmanaged commands but im not sure how that would translate with the rest of the code.
Thanks,
Mike
Do you know of any more forums related to this topic and a windows service? I’ve tried the above links and can’t find a solution.
I wish to temporarily disable the windows and alt+tab system keys which I can’t trap in my app. This occurs while a form is visible and then is released when the form is hidden.
My service already runs interactively (hence the form appearing), which is a proposed solution on one of the forums.
Many thanks for any help on this
i’m using ur code for low-level keyboard hook
but the above code throw an error
"namespace name ‘Process’ could not be found"
even after importing all the namespaces
can u plz help me in finding solution
thanks
Naveen Kushwaha
naveenkushwaha@gmail.com
Hello.. How can I distinguish between capital letters, other languages' letters, and regular letters?
i.e :
c and C will give me the same results.
also for every other key like b and B.
Thanks!
Bob, you can use GetKeyState from user32.dll to check the status of the shift key, for example.
But it doesn't get me other languages.
Also, if Caps Lock was hit it doesn't give me the state.
I would like to know which letter has been typed in Hebrew.
Any solution to make this work on Vista?
Thank you, but the code is too short; I am suspicious of its function.
Update: I finally resolved this. The code was using the WH_KEYBOARD flag. When run under Vista, the call to SetWindowsHookEx () would fail with a NULL return code. I changed the flag to WH_KEYBOARD_LL and Vista (and other platforms) are once again happy.
Hey, i want to ‘eat up’ keys, but your suggestion of not calling CallNextHookEx isn’t working. other applications receive input. any other ideas?
In HookCallback(), instead of returning nothing or the original value how can I return another key value? For example if ‘j’ was pressed how can I change it so that ‘k’ was pressed?
Hi,
How can I trap Alt+Tab keys? A similar question has been asked already by Jon on September 20 2007 but I didn't find an answer.
Thanks,
Hezi
I don’t know if it is exactly the same … I want to exchange keypresses like user pressed X but it is replaced with Y.
I need it in a way that the following events get the replaced value.
this should work like this
user presses X >>>
1. replace event is called X is replaced with Y
2. KeyDown with "Y"
3. KeyPress with "Y"
4. KeyUp with "Y"
Stephen,
Thanks for providing your code and keeping up with all the comments/questions that people have added. That’s very cool of you.
I have another question.
I’m using your code – along with the other example you gave for listening to mouse events – to detect whether my windows forms app is currently in use. I’m only interested in events from MY app/process, but when I pass-in the thread id for the current thread, the hook doesn’t work.
Also, when I just pass in zero as the thread id, my little test app is slow to close (3 or 4 sec compared to almost-instantaneous when the hooks aren’t there). I assume this is b/c it’s listening for events from all processes.
So my question is: how can I set the listener to listen to my app/process only? Or, if that’s not possible, how can I speed things up?
Thanks in advance.
-Matt
Jon, if your goal is to eat the input (not calling CallNextHookEx), make sure you return (IntPtr)1 from the hook callback rather than IntPtr.Zero.
Hezi, the code as I have it is looking only for WM_KEYDOWN messages… for alt, you also need WM_SYSKEYDOWN.
Hunter and Stephan, see.
Matt, for an example of setting a hook only for the current process, see.
Is there any way to get the code to write out the localized names of the keys? When I press "ÆØÅ" (three Danish letters) it says Oemtilde, Oem7 and Oem6.
Another thing: is it possible to get the code not to register the key several times because it is pressed down for a while, so it only tracks the key once every time you press and 'unpress'?
I got the keydown fixed! 😀 Just changed the KEYDOWN to KEYUP and changed the 0x-thingy.
I still haven’t figured out how to use a specific keyboard layout… 🙁
I also can’t get this to run as a service, even allowing for interaction with the desktop. All I’m trying to do is get the time of the very last input on an XP machine (with multiple fast user-switching logins). I need to know if any user is still active so my service can react appropriately. Very grateful for any suggestions/comments.
Hi Stephen,
Do you have anything similar for a low-level mouse hook?
Thanks.
Stephen, just wanted to thank you for all the hard work you have put in and for your patience replying to the questions.
Keep up the good work guys!
Hi – I’m trying to compile your program using Microsoft Visual C# Express Edition, but I get an error in the build process – it says ‘The type or namespace name ‘Windows’ does not exist in the namespace ‘System’ (are you missing an assembly reference?)’
How do I use an assembly reference?
Thanks,
Caspian
Hi toub, great code!
how to know the difference between eg. a (lowercase) and A (Uppercase)?
the same for other( b,B; c,C and so on…)
thanks a lot
Mark
You can use GetKeyState from user32.dll to determine the state of the shift key and the caps lock key. Alternatively, you can use Control.ModifierKeys to get the state of the shift key and Console.CapsLock to get the state of the caps lock key; internally, they just use GetKeyState.
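A sketch of the GetKeyState approach (the high-order bit of the return value reports whether the key is down; the low-order bit reports whether it is toggled):
[DllImport("user32.dll")]
private static extern short GetKeyState(int keyCode);
private const int VK_SHIFT = 0x10;
private const int VK_CAPITAL = 0x14;
private static bool IsUppercase()
{
    bool shiftDown = (GetKeyState(VK_SHIFT) & 0x8000) != 0; // high bit: key is down
    bool capsOn = (GetKeyState(VK_CAPITAL) & 0x0001) != 0;  // low bit: toggle is on
    return shiftDown ^ capsOn;                              // shift inverts Caps Lock
}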
Hi,
I want to hook Fn+ keys. For ex Fn+F1..F10.
Please advise How to go about it.
Thanks a lot.
i want hook for toggling windows
Stephen and Gary,
I notice a couple of comments about the issue sendkeys with the control key. I’m getting as Gary mentioned, the control (and alt) keys in addition to my send key string. Can either of you elaborate on the cause and workaround / solution for this situation.
Thanks for this great article and thread!
Ricky
PingBack from
I am having trouble capturing shift+{any key} when it is pressed in a continuous manner.
It goes like this..
1. You press shift+a
2. Release the ‘a’ key but dont release the shift key.
3. Press ‘b’. Here the program is supposed to capture shift+b but it only captures b.
I am using GetAsyncKeyState to get the status of shift key. If I release the shift key at step 2 above, the program capture shift+b fine.
Am I missing something here?
hello
i have a problem which seems a bit strange.
i have a small program that do the follow:
if (vkCode == key) sendkey.sendwait("^c");
when the key is the CTRL key the program runs well. if it’s a shift it doesn’t.
any idea?
Hi,
I want to do something on when key down and also on key up.
like
onKeyDown()
{
///do some thing
}
onKeyUp()
{
///do some thing
}
I am not familiar with c# and .net please can you help me out.
Hi.
Is there any possibility to place the self-defined hook procedure at the end of the hook chain?
E.g. if I monitor keyboard strokes and identify a ‘ctrl+c’ my hook fires before the copy-action (resulting from ctrl+c) starts. Is it possible to have my hook fire after this copy action?
Thanks in advance.
Cheers, Lasse.
I'm trying to send keys to a DirectX application (Microsoft Train Simulator); the classic SendKeys doesn't work…
I tried to see if there's something in DirectX to emulate a keypress, but I still haven't found anything.
Please help me!!
Thank you so much this works like a dream. You dont know how much this just helped me!!!
Is there a way to capture keys from a multimedia keyboard?
I am able to capture all keys except these: "PLAY", "PAUSE", "FORWARD", "REVERSE", "HOME", and "POWER".
Thank you for your hard work.
Hello! Thank you for this great piece of code! It helped me a lot. Using the program above, I’m "recording" the keys pressed into a file. Everything is ok, I am able to handle situations when Shift or Caps are pressed.
My question: how can I implement a timer so the file gets emptied at a specific interval (example: every 10 min. the file is emptied)?
I don’t know where should I declare the timer. I tried to declare it in Main(), something like:
_hookID = SetHook(_proc);
System.Timers.Timer timer = new System.Timers.Timer(30000);
timer.Elapsed += new ElapsedEventHandler(anotherclass.OnTimedEvent);
timer.Enabled = true;
Application.Run();
UnhookWindowsHookEx(_hookID);
The code above works just ONCE! The file gets emptied once…
Thank you. I hope you will reply soon.
Nicholas
Hi! Me again… The timer works fine, there was a problem somewhere else in my code… Thanks anyway.
Nicholas
kool application… this is what i was looking for from long time
Hey, why do you use Application.Run() to call the HookCallback() method? Unfortunately, I got an error "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." when I tried to call the "HookCallback()" method without Application.Run().
The error shows in the following code,
return CallNextHookEx(_hookID, nCode, wParam, lParam);
i’m using another form in my program, hence i need to call "HookCallback()" method without "Application.Run".
Is there any way to do like this….
hi! i am waiting for an answer. how can i add a new item to the pop-up menu of the computer (right click)?
thanks,
It’s very useful code. Thank you a lot.
Hello All!
I am making a service application which logs user activities like mouse moves and key presses, and here come Hooks. First of all I tried it in a console application and everything was perfect, but when I put the code (and some other) into the service app and started it, nothing happened. I am sure the code is good. I am running the service in the background with Local System privileges.
Is there anybody can help me?
Thanks!!
Okay, I found the way to do this.
Here is the code:
put it in AfterInstall:
RegistryKey ckey = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet\Services\ServiceName", true);
if (ckey != null)
{
if (ckey.GetValue("Type") != null)
{
ckey.SetValue("Type", ((int)ckey.GetValue("Type") | 272));
}
}
Hello. I’m working on this bit as a windows service as well, in a Vista 32-bit environment. Unfortunately, this all works well and good until I run it as a service. Then it’s all failure. I noticed my service will actually briefly catch keys after the installer runs, but shortly after it’s shutdown and when I launch it from my services panel I get nothing. I tried the above mentioned fix, setting my type value to 272, but no avail. Has anyone else found a way around the Windows Service limitations?
Hi Toub,
I have a simple question:
Your code captures all key presses globally, right? Is there a way to limit those to a single application, like a way to specify in the hook so that key presses are only captured if that application is in focus?
Also, do you know how to detect if shift/alt/control keys are pressed? I would like to add this info separately to the event args. Should I convert the keycode into Keys? If so, how? Because I am writing a DLL, not an exe.
Thanks.
Hey, i have written code that works perfectly in Windows and console apps, checking all kinds of key combos, and it works wonderfully. But when i run it inside a service it won't work. In my opinion, services run under a different user session and therefore they catch keystrokes only for the user under which they are running. Now my question is: has anyone had the hook working under a service? If so can you please show us. I have tried a million things and nothing works. Also, has anyone been able to run a .NET DLL under svchost? Maybe that's the answer. I haven't been able to do either one.
Hi, this is a great start for me – a newbie.
I have a Java program that programmatically toggles the Caps Lock key. Is there a way to allow the 'programmatic key strokes' but block the actual key strokes from the keyboard?
Any advice at all would be gratefuly received.
Regards
Abe
Do you have code to capture keyboard key events using a windows service, without using a form?
Could you please suggest where I can find this code?
Thanks
Anil
AnilM@ForBiztech.com
Hi, have you ever tried to use comments in your code? I think they are fundamental.
How would I use this in a Windows Form Application?
Works almost perfectly, just one detail: I opened an instance of Notepad and when I tried to close it, it took a long time to close.
Any suggestions?
I have a requirement to alert the user that the Caps Lock key is on, and I don't want to use any heavy interop. Is there any other way to do this?
Regards,
Pramod Pallath Vasudevan.
Please can someone tell me how to make a keyboard hook from a windows service? I was trying to allow it to interact with the desktop but it is not working and I don't get any exception…??
The reason this is not working in a windows app is the line:
Application.Run()
Then the very next line unhooks the hook.
So i removed the Application.Run and put the unhook call in the FormClosed event.
This works, however my call back is being called twice now?
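For reference, a sketch of the Windows Forms pattern (hook once at startup, unhook once on close; the Application.Run(new Form1()) already in Program.Main supplies the message pump, so don't call Application.Run again inside the form):
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        _hookID = SetHook(_proc);
        FormClosed += delegate { UnhookWindowsHookEx(_hookID); };
    }
}
If the callback then fires twice per key, check that the hook isn't being installed twice (e.g. in both the constructor and a Load handler).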
Stephen, I'm writing a service that catches every PrintScreen and saves the image to a directory.
My SetHook returns a number, but my HookCallback doesn't respond to any keypress. Can you help me?
Thanks much for your code.
I modified it as a class. Now I can use it like this:
KeyHook newKeyTrap = new KeyHook();
//IMPORTANT – Set the event handler for the KeyHook return event
newKeyTrap.keyHookReturn += new KeyHook.KeyHookReturn(HookReturn);
newKeyTrap.Start();
private void HookReturn(int keyEvent, int vkCode)
{
//Do Any things with vkCode
//You can use the System.Windows.Forms.Keys class to check whether any button is pressed, including multimedia keys
//If you don't know the name of a button in the Keys class, just write out the vkCode and search for that code in the Keys class
}
My code can also detect which window has focus (by the window title)
ladaidong@yahoo.com
Thank you very much for your code.
3 years later, still helping people.
thanx for the interesting posts
awesome piece of code. Cheers!
i have a requirement where i need to create a virtual keyboard whose keys change with the culture, in C# code.
So if the A S D F keys are on the English keyboard, then if i change the culture to French it should show me the French keyboard keys: in place of A S D F it should start showing Q S D F
does anyone knows how to do that.
Возникла сегодня задача… Сделать возможность приложению реагировать на нажатие клавиш. Ничего сложного
hello… when i compile this code, i'm getting "Error 1 'WindowsFormsApplication1.Form1.Dispose(bool)': no suitable method found to override"… i just copy-pasted the above code into a new Forms application… what do i have to do to make this code work?
Great post, dear.
I was looking for exactly the same thing you did.
Thanks.
Keep it up.
Great post.
I have question about Windows Mobile development.
Can we use the same code into the Windows Mobile application? If yes, then what about the OEM keys?
Thanks.
Regards,
Amit Rote
How to create a keyboard and mouse hook in Visual C++ 2005
Step 1: Create a CWinThread derived thread to house the hook procedures. The documentation tells us that hook procedures can have global or thread state. We are interested in thread state so we place our hook in a thread and then hook the Thread ID of the main .EXE, which can be DOC/VIEW or Dialog based.
Step 2: Instantiate the thread that contains the hook in the application.
Step 1 Code
#define WM_ENDTHREAD WM_APP + 100
// CHookThread
class CHookThread : public CWinThread
{
DECLARE_DYNCREATE(CHookThread)
public:
CHookThread(int iThreadId = GetCurrentThreadId());
virtual ~CHookThread();
virtual BOOL InitInstance();
virtual int ExitInstance();
public:
afx_msg void OnEndThread(WPARAM wParam,LPARAM lParam);
protected:
int m_iThreadId;
static HHOOK m_hKeyBdHook;
static HHOOK m_hMouseHook;
static LRESULT CALLBACK KeyBdHookProc(int nCode,WPARAM wParam,LPARAM lParam);
static LRESULT CALLBACK MouseHookProc(int nCode,WPARAM wParam,LPARAM lParam);
protected:
DECLARE_MESSAGE_MAP()
};
HHOOK CHookThread::m_hKeyBdHook = NULL;
HHOOK CHookThread::m_hMouseHook = NULL;
IMPLEMENT_DYNCREATE(CHookThread, CWinThread)
CHookThread::CHookThread(int iThreadId) : m_iThreadId(iThreadId)
{
}
CHookThread::~CHookThread()
{
// Unhook the keyboard
if (m_hKeyBdHook)
{
UnhookWindowsHookEx(m_hKeyBdHook);
m_hKeyBdHook = NULL;
}
// Unhook the mouse
if (m_hMouseHook)
{
UnhookWindowsHookEx(m_hMouseHook);
m_hMouseHook = NULL;
}
}
BOOL CHookThread::InitInstance()
{
// Start recording the keyboard messages
HINSTANCE hInstance = AfxGetInstanceHandle();
m_hKeyBdHook = SetWindowsHookEx(WH_KEYBOARD,CHookThread::KeyBdHookProc,hInstance,m_iThreadId);
if (!m_hKeyBdHook)
DebugLastError();
// Start recording the mouse messages
m_hMouseHook = SetWindowsHookEx(WH_MOUSE,CHookThread::MouseHookProc,hInstance,m_iThreadId);
if (!m_hMouseHook)
DebugLastError();
return TRUE;
}
int CHookThread::ExitInstance()
{
return CWinThread::ExitInstance();
}
// CHookThread message handlers
BEGIN_MESSAGE_MAP(CHookThread, CWinThread)
ON_THREAD_MESSAGE(WM_ENDTHREAD,&CHookThread::OnEndThread)
END_MESSAGE_MAP()
// Record the keyboard messages
LRESULT CALLBACK CHookThread::KeyBdHookProc(int nCode,WPARAM wParam,LPARAM lParam)
{
LRESULT Res = CallNextHookEx(m_hKeyBdHook,nCode,wParam,lParam);
return Res;
}
// Record the mouse messages
LRESULT CALLBACK CHookThread::MouseHookProc(int nCode,WPARAM wParam,LPARAM lParam)
{
LRESULT Res = CallNextHookEx(m_hMouseHook,nCode,wParam,lParam);
return Res;
}
void CHookThread::OnEndThread(WPARAM wParam,LPARAM lParam)
{
// End the thread
PostQuitMessage(0);
}
Step 2 Code
In the constructor of the dialog or the view class, create the thread that creates the hook. Make sure to give it the thread id of the current application!
// CRemoteDesktopView construction/destruction
Ctor::Ctor()
{
// Hook the keyboard and mouse
m_pHookThread = new CHookThread(GetCurrentThreadId());
m_pHookThread->CreateThread();
}
Ctor::~Ctor()
{
// Unhook the keyboard and mouse
m_pHookThread->PostThreadMessage(WM_ENDTHREAD,0,0);
}
This is all there is to it. Of course you are responsible for filling in the code after the actual event is received! In my case, I write PC remote control software and I use journaling to help me control the server by recording the messages on one machine and playing them back on another.
I have converted it into VB.NET Express, for whoever wants it. Enjoy.
Imports System
Imports System.Diagnostics
Imports System.Windows.Forms
Imports System.Runtime.InteropServices
Public Class Form1
Private Const WH_KEYBOARD_LL As Integer = 13
Private Const WM_KEYDOWN As Integer = &H100
Private Shared _proc As New LowLevelKeyboardProc(AddressOf HookCallback)
Private Shared _hookID As IntPtr = IntPtr.Zero
Private Shared Function SetHook(ByVal proc As LowLevelKeyboardProc) As IntPtr
Using curProcess As Process = Process.GetCurrentProcess()
Using curModule As ProcessModule = curProcess.MainModule
Return SetWindowsHookEx(WH_KEYBOARD_LL, proc, GetModuleHandle(curModule.ModuleName), 0)
End Using
End Using
End Function
Private Delegate Function LowLevelKeyboardProc(ByVal nCode As Integer, ByVal wParam As IntPtr, ByVal lParam As IntPtr) As IntPtr
Private Shared Function HookCallback(ByVal nCode As Integer, ByVal wParam As IntPtr, ByVal lParam As IntPtr) As IntPtr
If nCode >= 0 AndAlso wParam = CType(WM_KEYDOWN, IntPtr) Then
Dim vkCode As Integer = Marshal.ReadInt32(lParam)
‘Console.WriteLine(CType(vkCode, Keys))
MsgBox(CType(vkCode, Keys).ToString)
End If
Return CallNextHookEx(_hookID, nCode, wParam, lParam)
End Function
<DllImport("user32.dll", CharSet:=CharSet.Auto, SetLastError:=True)> _
Private Shared Function SetWindowsHookEx(ByVal idHook As Integer, ByVal lpfn As LowLevelKeyboardProc, ByVal hMod As IntPtr, ByVal dwThreadId As UInteger) As IntPtr
End Function
<DllImport("user32.dll", CharSet:=CharSet.Auto, SetLastError:=True)> _
Private Shared Function UnhookWindowsHookEx(ByVal hhk As IntPtr) As <MarshalAs(UnmanagedType.Bool)> Boolean
End Function
<DllImport("user32.dll", CharSet:=CharSet.Auto, SetLastError:=True)> _
Private Shared Function CallNextHookEx(ByVal hhk As IntPtr, ByVal nCode As Integer, ByVal wParam As IntPtr, ByVal lParam As IntPtr) As IntPtr
End Function
<DllImport("kernel32.dll", CharSet:=CharSet.Auto, SetLastError:=True)> _
Private Shared Function GetModuleHandle(ByVal lpModuleName As String) As IntPtr
End Function
Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
' The form's existing message loop pumps the hook; don't call Application.Run here
_hookID = SetHook(_proc)
End Sub
Private Sub Form1_FormClosed(ByVal sender As Object, ByVal e As FormClosedEventArgs) Handles MyBase.FormClosed
UnhookWindowsHookEx(_hookID)
End Sub
End Class
Thanks for the code, I found it very useful, but how can we use it to disable Alt+F4?
I tried my best with this code but it doesn't work. Actually, when pressing Alt+F4 it doesn't even call the HookCallback function, so there is no way to track it.
Does anyone have an idea to get around this?
This agreement narrates about new on-line finance center, where any man, can get a credit.
Cool post, I’ll try this tonight!
Hi,
I tried your application and it works great except for a little problem. Maybe you can help me out.
In the HookCallback function we get keys like Keys.Oem1, Keys.A, etc. The problem is that I want to convert Keys.Oem1 to the actual character of the input key that a "real" application sees. I have tried MapVirtualKeyW but it has limitations, since it is blind to any keyboard state such as the Shift key, etc. I believe WM_CHAR is the correct way of doing it, but I don't know how 🙁 I also googled but no luck.
Thanks for your help in advance
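One approach, not from Stephen: ToUnicode combined with GetKeyboardState (both user32.dll; needs using System.Text). Note that GetKeyboardState reflects the calling thread's input state, so results from inside a low-level hook can lag behind the actual modifier state:
[DllImport("user32.dll")]
private static extern bool GetKeyboardState(byte[] lpKeyState);
[DllImport("user32.dll")]
private static extern uint MapVirtualKey(uint uCode, uint uMapType);
[DllImport("user32.dll")]
private static extern int ToUnicode(uint wVirtKey, uint wScanCode,
    byte[] lpKeyState, StringBuilder pwszBuff, int cchBuff, uint wFlags);
private static string VkCodeToString(uint vkCode)
{
    byte[] state = new byte[256];
    if (!GetKeyboardState(state)) return string.Empty;
    uint scanCode = MapVirtualKey(vkCode, 0); // 0 = MAPVK_VK_TO_VSC
    StringBuilder sb = new StringBuilder(8);
    int count = ToUnicode(vkCode, scanCode, state, sb, sb.Capacity, 0);
    return count > 0 ? sb.ToString(0, count) : string.Empty;
}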
Thanks! I had some code to capture the key press… but without the "Marshal.ReadInt32(lparam)", I didn’t know WHICH key was being pressed! This provided the missing link.
How to eat key strokes?
For example, i need to clear key buffer, to avoid print space, when i press WINKEY + SPACE, and perform some operation…
private static IntPtr HookCallback(
int nCode, IntPtr wParam, IntPtr lParam)
{
int vkCode = Marshal.ReadInt32(lParam);
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
if ((Keys)vkCode == Keys.LWin ||
(Keys)vkCode == Keys.RWin)
{
WinKeyStillPressed = true;
}
}
if (nCode >= 0 && wParam == (IntPtr)WM_KEYUP)
{
if ((Keys)vkCode == Keys.LWin ||
(Keys)vkCode == Keys.RWin)
{ //clear flag
WinKeyStillPressed = false;
}
if (WinKeyStillPressed)
{
if ((Keys)vkCode == Keys.Space)
{
PlayPauseInner(); // My Stuff
return CallNextHookEx((IntPtr)0, 0, wParam, (IntPtr)0); // does not eat key strokes
return (IntPtr)0; // does not eat key strokes
return (IntPtr)1; // does not eat key strokes
}
if ((Keys)vkCode == Keys.Left)
{
PlayRewind(); // My Stuff
}
if ((Keys)vkCode == Keys.Right)
{
PlayForward(); // My Stuff
}
}
}
return CallNextHookEx((IntPtr)0, 0, wParam, (IntPtr)0);
}
hi all,
how to find the names of all the processes receiving keyboard input? please help me on this…
Thanks in advance
Hi,
Is it possible to trap a series of keystrokes before the Enter key is pressed?
I want to trap a series of keystrokes, i.e. "a1de".
How is it possible?
If anyone is still reading: this code has gotten me 90% of the way, but I'm trying to figure out how to capture input from a particular HID and let the others pass through.
Are these hooks at too high level of abstraction to do this?
Ideas?
I’m trying to use this code to output a different keyboard char than the one actually typed. The relevant message has been changed to this:
private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
{
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
{
//Console.WriteLine((Keys)vkCode);
KBDLLHOOKSTRUCT replacementKey = new KBDLLHOOKSTRUCT();
Marshal.PtrToStructure(lParam, replacementKey);
replacementKey.vkCode = 90; // char ‘Z’
Marshal.StructureToPtr(replacementKey, lParam, true);
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
Unfortunately this outputs a "A first chance exception of type ‘System.ArgumentException’ occurred in foobar.exe" every time I press a key. Any idea?
No luck creating a Windows Service that would capture Hot Keys like Ctrl+Alt+[some char].
The service captures nothing. I tried that patch with the Registry, does not work. Although the code works for Windows Forms applications.
If you guys can find a solution for Windows services please share
Thank you
Nuren
Nuren >> try this:
Cool, Do you know how to do the (approx.) equivilant in Visual Basic?
Thanks for sharing this, I remembered the days of C++ and windows programming.
I am new to windows programming, and i am trying to trap the ALT+TAB, ALT+ESC… keys and then disable their action. can you help me to get this done? I need to do it in VB.Net
I use this sample code on windows 7, and i replaced Console.WriteLine((Keys)vkCode) with MessageBox.Show(Convert.ToString((Keys)vkCode)). when it runs, i press one key 11 times and the keyboard hook crashes!!!! I don't know why; can you help me plz??????
This is so helpful! Thanks for sharing.
Hi,
Thanks for the above mentioned code. I was actually searching for something that can hook the keyboard events only for one application i.e. powerpoint 2007.
In your code SetWindowsHookEx expects the parameter 0, therefore the hook is applied for all the applications. I do not want that. i only want the hook to be applicable with powerpoint 2007 application. Can you help?
Thanks for the piece of code it's extremely helpful! 🙂
HookCallback() is never called in .NET 4.0 on XP SP3. Am I missing something?
I know it isn't really a productive comment, but thanks a lot for this!
Hello, I wanted to see if it is possible to "modify" the hook so that it changes de key pressed, for example: if i press "a", it will display a "b"…
I would like to know what happens if I don't use the Application.Run(); , it seems that this call
is necessary to make it run.
But I can't compile using Application.Run() .
If I code a windows service how the code should be changed?
Thanks
@JCooke – I think this is too high-level for your needs.
I'm working on a project with some similar requirements and you'd need to look at using GetRawInputData from User32.dll.…/rawinput.aspx probably has what you need though.
Does any1 know if I can store this KeyboardHook code in seperate Class or Module ?
I dont want to mix all this KeyboardHook with other ode in my form so I would like to have it in external module or class file?
I belive this is simple, but I am not so familiar with it ..
how can i use the same for compact framework??
hi Stephen,
nice article!!! how can i perform this.. ex: first press A and then B after that AB becomes C..? (deleting both A and B)
press A -> A
press B -> C (generally this is AB)
cheers
my problem is that whe using low level key board hook in a console app, it can get the key information when the app lost focus which is expected, but when I change this code a little bit and make it works in a win form app, it can only get the key information just when the window get focus, but when it lost focus, it cannot get the key information anymore.
I am trying to run this application as task (on Task Scheduler) on Windows 7 but there is an issue which I couldn't figure out.
It might be because of either threading or security?
Has anyone run this code as a task or it is not possible?
Snow still underground rustling, more bitter cold air. When we looked around, neusoft wan to already became a XueSu ice sculpture of the world. Nomura HuangZhong, cold Lin haystack was draped thick.<a href="">women down jacket</a>
Thanks for this code.
Thanks, made my tool complete. 🙂
Hi
Is there a way to know which device send the input?
i need to identify the input source because i want to filter the barcode scanner
Hi,
Does this work for WinForm? I've tried it in winform and it's not working. If I comment out the Application.Start(); The winform will no capture the keys. If I leave Application.Start(); then the form will not show but the key capture fires. Is console app codes different than winform code? What am I missing?
Thanks,
ED
Error 1 The name 'SetHook' does not exist in the current context c:users****documentsvisual studio 2010ProjectstrycatchtrycatchProgram.cs 29 23 trycatch
wat to do?
Hi there,
I'm developing an application which is doing some things in a loop, and this loop have to breaked when any key is entered.
I tried the above code but the HookCallback method is called only after the loop.
Is there a way i can do any modify in this code to make it work for me ?
I looked into it :…/Processing-Global-Mouse-and-Keyboard-Hooks-in-C
but it uses a lot of its own classes which is not good.
VisualStudio 2010, C#, windows7, .NET 3.5
Excellent work! I'm using this to trap (and eat) some keypress combinations that are invalid under certain data conditions for my users in an accounting package they use. Very Pleased!
we made up a open source component in c# that uses hooks or polling (you can choose), way more convenient:…/superkeylogger
That was great
thanks
Im not sure if this will get answered? but im going to ask in any case.
Is there a way that we can measure the speed of the current users usage in terms of keyboard mouse activity?
not using a prescribed piece of text, like most typing-tutors do,
but more along the lines of how fast is the user using the keyboard and mouse independently and in combination?
regardless of what the user is actually doing.
I have used this code in windows service, But it is not detecting any key events.
Do you have any idea to handle this in windows service.
Hi. I'm making stopwatch using the shortcuts and it works when I press 2 times "Home" after opening the program. When I press one time or wait a little after running the program, it will start the stopwatch and nothing else. My code is as follows:
private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam) {
if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN) {
vkCode = Marshal.ReadInt32(lParam);
if ((Keys)vkCode == Keys.Home)
{
stopwatch.Reset();
stopwatch.Start();
broken = 0;
start = DateTime.Now;
while(broken == 0) {
elapsed = stopwatch.Elapsed;
minutes = (int)elapsed.TotalMinutes;
fsec = 60 * (elapsed.TotalMinutes – minutes); //double
sec = (int)fsec;
tsOut = String.Format("{0}:{1:D2}", minutes, sec);
Console.Write("r{0}", tsOut);
start = DateTime.Now;
while (start.AddSeconds(1) >= DateTime.Now) //it runs for a second
{
CallNextHookEx(_hookID, nCode, wParam, lParam);
vkCode = Marshal.ReadInt32(lParam);
if ((Keys)vkCode == Keys.End)
broken = 1;
}
}
stopwatch.Reset();
CallNextHookEx(_hookID, nCode, wParam, lParam);
}
if ((Keys)vkCode == Keys.End)
{
stopwatch.Reset();
}
}
return CallNextHookEx(_hookID, nCode, wParam, lParam);
}
Would you help me with this?
can you gave the sample program sir of Low-Level Keyboard hook
it gonna help me a lot in my study
just send me please.. g.jhonmichael@gmail.com
thanks you sir….:D
Hi, with this code how can I hook an USB barcode reader knowing its HID?.
Regard.
Lenny
having same prob as you reg. ll keyb proc on excel2013, saw your posting on the guy with word2013 prob's, you asked for some help, did you get any answer? my mail is permortenjensen@hotmail.com
I need one help. i am running application in minized mode and override wndproce() method to capture all windows message.now problem start i do not get windows message when i double click on same application. can u help to figure out it.
Its working perfectly I changed it according to my application perspective. but the problem is when window is logged off and my application is still running behing (via thread) it stopped recording key strokes.
Why I fail to read the barcodes that created by this barcode solution()? Is this the barcode quiet zone setting issue?
barcode scanner sdk for c#.net…/csharp-barcode-reader.shtml
i have a tool where in i will have to lock the computer if the computer is left idle for 10min while the tool is still running.
Please help asap…
Thanks in advance!
Can anyone help ? how to host wcf service in windows service which does keystroke logged and saves it to a file.
Thank u very much Steph, u're an angel and u'll change the world!!!
ttomsen,
thank u so much ^_^
you helped me to run it in windows form application 🙂
it works 😉
thanks for sharing this valuable code…
Canany one help me with hooking DOUBLE tap
Hello! How I can it ?
switch ((Keys)vkCode)
{
case (Keys.C == (Keys)vkCode && Keys.Control == Control.ModifierKeys):
Console.WriteLine("CTRL+C: {0}",(Keys)vkCode);
break;
case (Keys.V == (Keys)vkCode && Keys.Control == Control.ModifierKeys):
Console.WriteLine("CTRL+V: {0}",(Keys)vkCode);
break;
etc
I am using the mute key to switch the keyboard to a different language. Is it possible to make the mute key do only this thing without disabling the sound? Thanks. | https://blogs.msdn.microsoft.com/toub/2006/05/03/low-level-keyboard-hook-in-c/ | CC-MAIN-2016-50 | refinedweb | 9,178 | 75.1 |
I was recently working on an application loading a big object using a custom formatter. A simple question came up: how big is this object in memory? It is clear that sizeof does not work. What are the alternatives?
The first one is to use Visual Studio 2008 Profiler and see how big are every object of the application. Since my application is pretty big, it could take long time before I have the full profile. I needed a simpler and faster way to measure the memory of that single object.
What about WinDbg and SOS extensions? I verified the symbols path were set in my Visual Studio environment and set a breakpoint at the line the object were created. Then from the immediate window I loaded SOS extensions and started looking at the statistics of all objects loaded in the managed heap:
!dumpheap -stat
The output of the above command provides the list of managed objects without considering the the inner objects. But that list provides also the methods table from which I can get the addess of the object I am interested on:
!dumpheap -mt 0x...
Finally I can get the real size of the object:
!objsize 1dd7a55c
Mission accomplished!
Buck)."
Visual Studio Team Test provides a set of asserts through three classes: Assert, StringAssert and CollectionAssert (all defined in Microsoft.VisualStudio.TestTools.UnitTesting namespace). All of them contains a long list of static methods to test several different kind of data in different ways.Even if there is a huge number of methods, there are many scenario that aren’t still covered. Let's see a simple example. We need to test if two instances of the same class are equals (contains the same data). Let's see the basic class:
public class MyClass{ public MyClass(int intProp) { this.intProp = intProp; }
public int IntProp { get { return intProp; } set { intProp = value; } }
private int intProp;}
Here is our test case:
Assert.AreEqual(new MyClass(12), new MyClass(12));
Executing the above test we will get a failure: "Assert.AreEqual failed. Expected:<MyLib.MyClass>, Actual:<MyLib.MyClass>".Why they aren’t equals? We are comparing tow different pointers, simple! Ok, but this isn’t the subject of the post, so, I’ll go ahead.In order to make the test case succesfull all you need to do is to override the method Equals in MyClass:
public override bool Equals(object obj){ if (!(obj is MyClass)) return false;
return ((MyClass)obj).intProp == intProp;}
I execute again and it works. Is this enough? Not really. Lets consider another test case:
Assert.AreEqual(new MyClass(14), new MyClass(12));
The test will fail as expected producing the following error message “Assert.AreEqual failed. Expected:<MyLib.MyClass>, Actual:<MyLib.MyClass>.”
Not really clear what’s wrong here. We would have some more clear idea of what are the differences, since we are testing with Assert.AreEqual. The idea is to implement a custom assert. An assert class is nothing more than a class with a bunch of static methods that checks the parameter values. If something is not correct (assertion not satisfied), they throws an exception (AssertFailedException). So, in our basic sample we would have something like:
public class MyClassAssert{ public static void AreEqual(MyClass expected, MyClass actual) { Assert.IsNotNull(expected); Assert.IsNotNull(actual);
if (expected.IntProp != actual.IntProp) { throw new AssertFailedException( string.Format( "MyClass.IntProp values are different.{0}Expected: {1}, Actual: {2}", Environment.NewLine, expected.IntProp, actual.IntProp)); } }}
Applying the above assertion to our test case:
MyClassAssert.AreEqual(new MyClass(14), new MyClass(12));
We will get a more clear error message:"MyClass.IntProp values are different.Expected: 14, Actual: 12"
Ok, the sample is super simple and it doesn’t make too much sense, but it give an idea of how to implement custom assertions for more complex ‘real’ types providing meaningful test messages.
UID (unique identification) generation is a really hot topic. It can be really simple as well as really complex. Before going deep in the subject let’s do a couple of simple samples.
To identify a person (citizen) you usually use the SSN (Social Security Number). Isn’t it true? Not at all, because if I don’t consider the domain, the above assertion is completely false. In fact, the SSN is used only in US. So, the SSN is a valid ID for US citizens but not for Chinese, Indians or anyone else.
If I want to identify a record in a table (relational database) I usually use a primary key, which can be simple or composed. But that key is valid only in the domain of the single table. It is pretty guaranteed that we can have another key value in some other table or database.
So, to generate a UID it is really important to try to follow some principles:
The first point is not really important if we are talking about a private system/application. Let’s consider the primary key used in the database for DB optimization purpose. If the key is a private stuff of the database/application, integer IDs works very well. But as soon as they become exposed to the public we start having some problems.
Point 2 is really important for debugging, troubleshooting and human interaction. Think about the IP address, people prefer to use the DNS. Easier and clearer than the IP address.
Point 3 is important for scalability. Some applications need to generate thousands of IDs per second, and we cannot have an algorithm that takes seconds to generate an ID. Think about the ID generated for the lottery transaction system.
Point 4 is important for the history. When I lived in Italy I discovered that the Telecom used to re-assign the same phone number after 6 months it has been discontinued. The result was that I received phone calls for scheduling dentist appointments. Reusing an ID is not a good think
Point 5 is important for developers, to guarantee the ID is correctly implemented and used. If you consider an ID as a string, who can guarantee you are not generating a numeric ID (converted to string) and on the other side expecting that ID to be a GUID? Only at runt-time you can discover such problems.
There are good sample of IDs that can match most of the above points, but it is less common to find ID generation that satisfy all of them. For example GUIDs (Global Unique Identifier) satisfy at least point 1, 3, 4 and 5. But they are not human readable (at least to me).
The Xml Schema ID can satisfy quite easily points 2 and 3, but we need to provide some wrapper facility to make them compatible to 1, 4 and 5. In fact the XML Schema specification requires having the ID unique at the document level (the domain is the document).
Integers, which are widely used in databases, satisfy only point 3.
Strings are widely used in account databases, to identify people (i.e. their SSN), or companies, etc. In fact the string can satisfy all points except 5.
Personally, I really like to idea of composite ID, where the compounds are the domain and the domain based ID. In C# we can use generics to manage IDs as I did in the following (untested) class:
public class Id<TId> : IEquatable<Id<TId>>{ public Id(string domain, TId id) : this(domain, id, "/") { }
public Id(string domain, TId id, string separator) { if (string.IsNullOrEmpty(domain)) throw new ArgumentNullException("domain"); if (id == null) throw new ArgumentNullException("id"); if (string.IsNullOrEmpty(separator)) throw new ArgumentNullException("separator");
this.domain = domain; this.id = id; this.separator = separator; }
public string Domain { get { return domain; } }
public TId DomainId { get { return id; } }
public string FullId { get { return this.ToString(); } }
public override int GetHashCode() { return domain.GetHashCode() ^ id.GetHashCode(); }
public override bool Equals(object obj) { if (!(obj is Id<TId>)) throw new ArgumentException("obj");
return Equals((Id<TId>)obj); }
public bool Equals(Id<TId> other) { return other.domain.Equals(this.domain, StringComparison.OrdinalIgnoreCase) && other.id.Equals(this.id); }
public override string ToString() { return string.Concat(domain, separator, id); }
public static bool operator ==(Id<TId> x, Id<TId> y) { return x.Equals(y); }
public static bool operator !=(Id<TId> x, Id<TId> y) { return !x.Equals(y); }
private string domain; private TId id; private string separator;}
In this way we can quite easly manage simple Ids (GUID, int, string) as well as composed Ids (all you need to do is to override ToString, Equals and GetHashCode).
Many times we design our special kind of bit field to manage bitwise operations on enums. Here a simple example:
[
When the number of values becomes higher, we have a little readibility problem with the value for each field. A possible alternative is to use the left-shift operator
[Flags]public enum ConsoleModiefiers{ Alt = 0x0001 << 1, Control = 0x0001 << 2, Shift = 0x0001 << 3}
In term of performance we don't pay any penalties since the IL code generated by C# compiler is exactly the same, but our code is probably more clear.
The :-)
It seems a good day for starting the blog :-) I don't have plan for any particular topic with this blog, but you'll quickly discover several thinks of me:
Let’s see, who I am. I’m software developer at MSTV in server team. IPTV Edition is a very interesting product where just few people can work with it, but milions of people use it :-) It's simply fun.
DisclaimerT.
So, see you soon ...
Trademarks |
Privacy Statement | http://blogs.msdn.com/pierreg/ | crawl-002 | refinedweb | 1,591 | 57.27 |
you can
import javax.ws.rs.core.Context;
and then
@Context
ServletContext rsContext;
This will result in the normal ServletContext getting injected. From there
you can get the request info, etc.
On Thu, Nov 27, 2014 at 10:05 PM, Colbert Philippe <
colbert.philippe@gmail.com> wrote:
> I am using Java with Apache CXF to write the backend for single-page web
> site (like AngularJS and others). In my REST service function, I need
> access to the header of the http request (i.e. parameters and cookies) and
> I need access to the response header also (i.e. parameters and cookies).
> I
> need to read and possibly write parameters and cookies. The reason for
> this are important! I need to implement security features and session
> management. Those are important reasons indeed!
>
>
>
> Is there a way of getting access to both of these structures from within a
> web service function in Apache CXF RESTfull code?
>
>
>
> If it is not possible at this point, I strongly recommend that you
> implement a solution. A solution might be to create some new dependency
> injection annotations that would give access to these structures like
> @RequestCookie, @RequestParam, @ResponseCookie, @ResponseParam
> | http://mail-archives.apache.org/mod_mbox/cxf-dev/201411.mbox/%3CCAL_=CgfC+Hp6juT4zCOeawXJH8msOdoWQs8fu_AM_zgaqnUSnw@mail.gmail.com%3E | CC-MAIN-2018-30 | refinedweb | 193 | 59.19 |
On Sat, 2009-12-19 at 12:41 -0500, Ray Strode wrote: > So one thing I tried is to create a custom topic branch (a la private- > branches in pkgs cvs), and pushing it failed. > > Is this something we want to support? Or should topic branches be > pushed to separate private respositories? We definitely want to allow topic branches pushed to the main repo. I think we'll have to agree on a namespace to use for these, perhaps following the dist-cvs example and call them private-*. -- Jesse Keating Fedora -- Freedom² is a feature! identi.ca:
Attachment:
signature.asc
Description: This is a digitally signed message part | http://www.redhat.com/archives/fedora-devel-list/2009-December/msg00905.html | CC-MAIN-2014-41 | refinedweb | 108 | 67.15 |
So I've been trying to implement jumping into my game. I got down the jumping itself, but I'm struggling with the implementation. I'm also completely new to programming btw. I tried to use a ground check to determine if the animation should be played or not, but that just meant that it would play the jumping animation for a split second, but then the idle/running animation again since the ground check was positive. This is a result of the ground check happening too fast. Then I tried using a trigger, but then realised that I don't know a way of cancelling the animation after it happened. I would prefer to have an animation for going upwards, and one for going downwards, like in the title. Anyone got a solution for me?
Answer by tormentoarmagedoom
·
May 02 at 10:05 AM
Good day.
This is one of the main fucntions of Unity Animations; the control of when they commence, if they merge, if they stop, etc..
You should spend some time looking some youtube tutorial about this. Dont be afraidod spending some hours learning how to correctly control animations. If you spend just 2-3 hours learning with attention, you will be a super pro!!
Good luck!
Answer by Jack-Mariani
·
May 02 at 12:41 PM
There are quite many options to implement this (assuming you're using a rigidbody2d).
Consider that we have 2 elements here.
The animator used to implement the right animations.
The Rigidbody2D to check where the player is moving.
Basically, I think you've some transition on your animator that is not properly set.
I would set an order of priorities like this.
if player is not moving => idle
if player is moving && he is on the ground => run
if player is not on the ground and moving up => jump
if player is not on the ground and moving down => fall
With the animator we consider that each state could lead to any of the other states.
So we will have something like this.
Each arrow will be just a trigger to a specific state.
After that you may use this code to send the right trigger at the right time.
[RequireComponent(typeof(Rigidbody2D), typeof(Animator))]
public class Rigidbody2DAnimations : MonoBehaviour
{
//to decide the minimal speed
public float speedThreshold = 0.1f;
//the rigidbody
private Rigidbody2D _rigidbody2D;
private Animator _animator;
//cache the rigidbody at start (or awake)
private void Start()
{
_rigidbody2D = GetComponent<Rigidbody2D>();
_animator = GetComponent<Animator>();
}
//I think the best place to handle animations is on LateUpdate
private void LateUpdate()
{
//gets the velocity as a vector
var velocity = _rigidbody2D.velocity;
// --------------- IDLE --------------- //
//if the current speed is lower than the treshold you can set animator is idle
if (velocity.magnitude <= speedThreshold)
{
_animator.SetTrigger("Idle");
return;
}
// --------------- FACING --------------- //
//apply here any logic related to move left or right
if (velocity.x > 0) FaceRight();
else FaceLeft();
// --------------- ON GROUND --------------- //
//if we are moving on the ground
if (IsOnTheGround())
_animator.SetTrigger("Running");
// --------------- JUMPING --------------- //
else if (!IsOnTheGround() &&
velocity.y > 0)
_animator.SetTrigger("Jumping");
// --------------- FALLING --------------- //
else if (!IsOnTheGround() &&
velocity.y < 0)
_animator.SetTrigger("Falling");
}
//ground check logic here
private bool IsOnTheGround() { return true; }
}
You may remove the falling animation and just replace the code with this:
else if (!IsOnTheGround())
_animator.SetTrigger("Jumping");
This code just show you the logic and it's quite simple (there are too many calls of IsOnTheGround(), you may just keep the first one.
I'm being verbose just to let you understand the logic.
I suggest you to optimize both code and transitions if you want a more complex animation system.
Further.
Help with 8-DOM Weapon Sprite Issue
0
Answers
How do I stop an animation from looping?
1
Answer
How does Animator Culling work?
1
Answer
HELP,Watched tutorial
1
Answer
How do i extend a animation ?
1
Answer | https://answers.unity.com/questions/1627577/how-do-i-play-an-animation-when-jumping-and-then-p.html | CC-MAIN-2019-51 | refinedweb | 636 | 56.25 |
Hi all! I have a problem - after restart all nodes, aerospike delete all objects. Why?
Why Aerospike delete all objects?
In your other post, you had given the configuration of your database. The issue is that there is no line telling the database whether or not to read from the persistence file. By default it is “false”, if it has not been set. This means that the database will ignore the persistence file. You can turn it on by adding the “load-at-startup” value in the configuration file.
Note that it will take some time to start the node when it reads from the file.
namespace NAMESPACE { … storage-engine device { … data-in-memory true load-at-startup true } … } | https://discuss.aerospike.com/t/why-aerospike-delete-all-objects/155 | CC-MAIN-2018-30 | refinedweb | 119 | 75.81 |
LiveView TodoMVC - Part 2: params and hooks
Alessandro Mencarini
・5 min read
Welcome back!
In Part 1 we built an almost fully functional version of the classic TodoMCV tutorial by simply using Phoenix LiveView.
There are a couple of bits of functionality that we haven't covered: the "Active, Completed, All" filter in the footer, and the ability to double-click an item to edit it.
Let's tackle them!
Passing parameters to a live view
To make the links in the footer actually do something, you'll need to use the
live_path function as their destination:
<ul class="filters"> <li> <%= live_link "All", to: Routes.live_path(@socket, TodoMVCWeb.MainLive, %{filter: "all"}), class: selected_class(@filter, "all") %> </li> <li> <%= live_link "Active", to: Routes.live_path(@socket, TodoMVCWeb.MainLive, %{filter: "active"}), class: selected_class(@filter, "active") %> </li> <li> <%= live_link "Completed", to: Routes.live_path(@socket, TodoMVCWeb.MainLive, %{filter: "completed"}), class: selected_class(@filter, "completed") %> </li> </ul>
Again, we're using a helper function to decide whether the link should have the
selected CSS class: we'll define it in the view, by using a pattern matching trick:
# lib/todo_mvc_web/views/main_view.ex def selected_class(filter, filter), do: "selected" def selected_class(_current_filter, _filter), do: ""
To break it down: if the first and second argument we're passing to
selected_class/2 are the same string, it'll return the
"selected" string; an empty string will be returned otherwise.
We're now referring to a
@filter assign in the template, and we'll need to initialise it in the live view for the app to compile. Change the
mount function:
# lib/todo_mvc_web/live/main_live.ex def mount(_params, socket) do {:ok, assign(socket, todos: [], filter: "all")} end
Now: how do we capture the value that's passed by query string? We need a
handle_params callback in the live view!
# lib/todo_mvc_web/live/main_live.ex def handle_params(%{"filter" => filter}, _uri, socket) do {:noreply, assign(socket, filter: filter)} end def handle_params(_params, _uri, socket) do {:noreply, socket} end
This will be enough to get the selected filter to be highlighted. But what about actually filtering the todos? We'll use the template and its view for this.
Filter the todos
Start by defining a helper function that given a todo and our current filter will say whether a todo should be visible:
def todo_visible?(_todo, "all"), do: true def todo_visible?(%{state: state}, state), do: true def todo_visible?(_, _), do: false
Again, we use some pattern matching magic: if the filter is set to
"all", we're sure the todo needs to be visible; if the filter is set to
"active" and so is the state of the todo, show it; otherwise hide the todo.
Now, an elegant way to incorporate this change is to use a comprehension filter! In the template, change the start of the
ul like this:
<ul class="todo-list"> <%= for todo <- @todos, todo_visible?(todo, @filter) do %> ...
And that's it! You can now click around and only see the todos in the desired state.
Edit a todo
Phoenix LiveView does not (currently?) support binding an event to a double-click. To deal with this, we'll have to write a live view hook and... some JavaScript!
We'll start by defining the hook on the
li that hosts the todo:
<%= content_tag :li, class: todo_classes(todo), phx_hook: "Todo" do %>
Also add this at the bottom of the
li:
<form phx- <input class="edit" name="title" phx- </form>
We'll use the form to actually capture the changes to the todo text.
The function
todo_classes needs to be amended to allow for the
editing CSS class:
# lib/todo_mvc_web/views/main_view.ex def todo_classes(todo) do [ if(todo.editing, do: "editing"), if(todo.state == "completed", do: "completed") ] |> Enum.reject(&is_nil/1) |> Enum.join(" ") end
Now, for the big reveal! Go to
assets/js/app.js and add this:
let Hooks = {} Hooks.Todo = { mounted() { this.el.addEventListener("dblclick", e => { const toggle = this.el.querySelector(".toggle") this.pushEvent("edit", { "todo-id": toggle.getAttribute("phx-value-todo-id") }) }) }, updated() { const edit = this.el.querySelector(".edit") edit.focus() edit.setSelectionRange(edit.value.length, edit.value.length); } }
What's happening here? We're asking LiveView to help us with two things:
- When one of our
lis with the
phx-hook="Todo"attribute gets mounted, we want to set up a listener for double clicks that will send an
"edit"event down the wire, together with the todo id that we'll pick up from the checkbox of the
liwe double-clicked;
- When one of the
lis gets re-rendered, we want its
edittext input child to be in focus, and we want the cursor to be at the end of its contests.
(I'm not 100% happy with this, so if you have suggestions on how to improve this, please leave a comment!)
You'll also need to change the
liveSocket initialisation:
let liveSocket = new LiveSocket("/live", Socket, { hooks: Hooks })
We'll have to add a bunch of event handlers to the live view for this to work!
# lib/todo_mvc_web/live/main_live.ex def handle_event("edit", %{"todo-id" => id}, socket) do toggle_editing = fn %Todo{id: ^id} = todo -> %{todo | editing: true} todo -> todo end todos = socket.assigns[:todos] |> Enum.map(toggle_editing) {:noreply, assign(socket, todos: todos)} end def handle_event("change", %{"title" => text}, socket) do update_text = fn %Todo{editing: true} = todo -> %{todo | text: text} todo -> todo end todos = socket.assigns[:todos] |> Enum.map(update_text) {:noreply, assign(socket, todos: todos)} end def handle_event("stop-editing", %{"todo-id" => id}, socket) do toggle_editing = fn %Todo{id: ^id} = todo -> %{todo | editing: false} todo -> todo end todos = socket.assigns[:todos] |> Enum.map(toggle_editing) {:noreply, assign(socket, todos: todos)} end
That's quite a lot of code! Let's break it down a bit.
editis triggered by the JS hook and sets up the input text field to be visible by changing the
editingfield of a todo
changeis triggered by changes in the input text field and when the form gets submitted and sets the text of the todo to whatever is passed
stop-editingis set on blur (i.e.: when the input text field loses focus, because of the form being submitted or clicks in other areas of the page) and sets the todo field
editingto
false
Phew! That was a lot of work!
Let's move for the last bit of UI.
Todo counter with pluralisation
Add this at the start of the footer:
<span class="todo-count"> <%= left_count_label(@todos) %> </span>
Again, we'll do the heavy lifting in a view function!
# lib/todo_mvc_web/views/main_view.ex def left_count_label(todos) do ngettext( "1 item left", "%{count} items left", Enum.count(todos, fn t -> t.state == "active" end) ) end
For the pluralisation of the "items left" counter we rely on a
gettext facility. It's great we can leverage it even when we are not going to translate any locale!
Typespecs
To wrap things up, I'd suggest to add (Typespecs)[] as an exercise. They're great as documentation and they can assist catching potential errors when running dialyzer (most likely through the brilliant (dialyxir)[] package).
Just to get you started, as an example, here's a specced version of
handle_event in the live view:
@spec handle_event(binary, map, Phoenix.LiveView.Socket.t()) :: {:noreply, Phoenix.LiveView.Socket.t()}
When you have multiple heads of a function, remember you only need a spec before the first head.
And that's all for the TodoMVC with LiveView! I hope you found this a useful way of starting out with this amazing library! | https://dev.to/amencarini/liveview-todomvc-part-2-params-and-hooks-2l5 | CC-MAIN-2019-47 | refinedweb | 1,248 | 54.63 |
 or Stack Overflow and other sites to search for solutions.Â
Another important, but sometimes painful, topic is that of project setup. It is a necessary evil that needs to be done in the beginning of a project, but getting this right early on can reduce a lot of friction as your application grows with you. Therefore, a large part of this chapter is dedicated to demystifying and enabling you as a developer to save you from future frustrations and migraines.
We will also be able to create our first application at the end of this chapter and get a feel for the anatomy of an Angular application. To sum up, here are the main themes that we will explore in this chapter.
In this chapter, we will:
- Learn about semantic versioning, why it matters, and Angular's take on it
- Discover how we set up our project using Angular CLI
- Create our first application and begin to understand the core concepts in Angular
Using semantic versioning is about managing expectations. It's about managing how the user of your application, or library, will react when a change happens to it. Changes will happen for various reasons, either to fix something broken in the code or add/alter/remove a feature. The way authors of frameworks or libraries use to convey what impact a certain change has is by incrementing the version number of the software.
A production-ready software usually has version 1.0 or 1.0.0 if you want to be more specific.
There are three different levels of change that can happen when updating your software. Either you patch it and effectively correct something. Or you make a minor change, which essentially means you add functionality. Or lastly you make a major change, which might completely change how your software works. Let's describe these changes in more detail in the following sections.
A patch change means we increment the right most digit by one. Changing the said software from 1.0.0 to 1.0.1 is a small change, usually a bug fix. As a user of that software you don't really have to worry; if anything, you should be happy that something is suddenly working better. The point is, you can safely start using 1.0.1.
This means the software is increased from 1.0.0 to 1.1.0. We are dealing with a more severe change as we increase the middle digit by one. This number should be increased when functionality is added to the software and it should still be backwards compatible. Also in this case it should be safe adapting the 1.1.0 version of the software.
At this stage, the version number increases from 1.0.0 to 2.0.0. Now this is where you need to look out. At this stage, things might have changed so much that constructs have been renamed or removed. It might not be compatible to earlier versions. I'm saying it might because a lot of software authors still ensure that there is a decent backwards compatibility, but the main point here is that there is no warranty, no contract, guaranteeing that it will still work.
The first version of Angular was known by most people as Angular 1; it later became known as AngularJS. It did not use semantic versioning. Most people actually still refer to it as Angular 1.
Then Angular came along and in 2016 it reached production readiness. Angular decided to adopt semantic versioning and this caused a bit of confusion in the developer community, especially when it was announced that there would be an Angular 4 and 5, and so on. Google, as well as the Google Developer Experts, started to explain to people that it wanted people to call the latest version of the framework Angular - just Angular. You can always argue on the wisdom of that decision, but the fact remains, the new Angular is using semantic versioning. This means Angular is the same platform as Angular 4, as well as Angular 11, and so on, if that ever comes out. Adopting semantic versioning means that you as a user of Angular can rely on things working the same way until Google decides to increase the major version. Even then it's up to you if you want to remain on the latest major version or want to upgrade your existing apps.
As mentioned before, Angular represents a full rewrite of the AngularJS framework, introducing a brand new application architecture completely built from scratch in TypeScript, a strict superset of JavaScript that adds optional static typing and support for interfaces and decorators.
In a nutshell, Angular applications are based on an architecture design that comprises of trees of web components interconnected by their own particular I/O interface. Each component takes advantage under the covers of a completely revamped dependency injection mechanism.
To be fair, this is a simplistic description of what Angular really is. However, the simplest project ever made in Angular is cut out by these definition traits. We will focus on learning how to build interoperable components and manage dependency injection in the next chapters, before moving on to routing, web forms, and HTTP communication. This also explains why we will not make explicit references to AngularJS throughout the book. Obviously, it makes no sense to waste time and pages referring to something that will not provide any useful insights on the topic, besides the fact we assume that you might not know about Angular 1.x, so such knowledge does not have any value here.:
<input id="mySlider" type="range" min="0" max="100" step="10">set required for delivering this very same functionality, so we can build our own custom elements (input controls, personalized tags, and self-contained widgets) featuring the inner HTML markup of our choice and our very own style sheet that does not affect (nor is impacted) by the CSS of the page hosting our component. in this book, as we will see further down the line in this book. and actually recommend its use because of its higher expressivity thanks to type annotations, and its neat way of approaching dependency injection based on type introspection out of such type annotations.
There are different ways to get started, either using the Angular quickstart repository on the site, or installing the scaffolding tool Angular CLI, or lastly, you could use Webpack to set up your project. It is worth pointing out that the standard way of creating a new Angular project is through using Angular CLI and scaffold your project. Systemjs, used by the quickstart repository, is something that used to be the default way of building Angular projects. It is now rapidly diminishing, but it is still a valid way of setting up an Angular project. The interested reader is therefore recommended to check the Appendix A, SystemJS for more information on it.
Setting up a frontend project today is more cumbersome than ever. We used to just include the necessary script with our JavaScript code and a
link tag for our CSS and
img tag for our into.
What you need to get started is to have Git and Node.js installed. Node.js will also install something called NPM, a node package manager that you will use later to install files you need for your project. After this is done, you are ready to set up your Angular application. You can find installation files to Node.js atÂ.
The easiest way to have it installed is to go to the site:
Installing Node.js will also install something called NPM, Node Package Manager, which you will need to install dependencies and more. The Angular CLI requires Node 6.9.0 and NPM 3 or higher. Currently on the site, you can choose between an LTS version and the current version. The LTS version should be enough.
Once the Angular CLI is in place the time has come to create your first project. To do so place yourself in a directory of your choice and type the following:
ng new <give it a name here>
Type the following:
ng new TodoApp
This will create a directory calledÂ
TodoApp. After you have run the preceding command, there are twoÂ:
The Angular CLI doesn't just come with code that makes your app work. It also comes with code that sets up testing and includes a test. Running the said test is as easy as typing the following in the Terminal:
npm test
You should see the following:
How come this works? Let's have a look at the
package.json file that was just created and the
scripts tag. Everything specified here can be run using the following syntax:
npm run <key>
In some cases, it is not necessary to type
run and it will be enough to just type:
npm <key>
This is the case with theÂ
start and
testÂ).
First off, let's import the component decorator:
import { Component } from '@angular/core';
Then create the class for your component:
class AppComponent { title:string = 'hello app'; }
Then decorate your class using theÂ
Component decorator:
@Component({ selector: 'app', template: `<h1>{{ title }}</h1>` }) export class AppComponent { title: string = 'hello app'; }
We give theÂ
Component decorator, which is function, an object literal as an input parameter. The object literal consists at this point of theÂ
selector andÂ
template keys, so let's explain what those are.
AÂ
selector is what it should be referred to if used in a template somewhere else. As we call itÂ
app, we would refer to it as:
<app></app>
TheÂ
template orÂ
templateUrl is your view. Here you can write HTML markup. Using the Â
template  keyword, in our object literal, means we get to define the HTML markup in the same file as the component class. Were we to useÂ
templateUrl, we would then place our HTML markup in a separate file.
The preceding  example also lists the following double curly braces, in the markup:
<h1>{{ title }}</h1>
This will be treated as an interpolation and the expression will be replaced with the value ofÂ
AppComponent'sÂ
title field. The component, when rendered, will therefore look like this:
hello app
Now we need to introduce a completely new concept, an Angular module. All types of constructs that you create in Angular should be registered with a module. An Angular module serves as a facade to the outside world and it is nothing more than a class that is decorated by the decorateÂ
@NgModule. Just like theÂ
@Component decorator, theÂ
@NgModule decorator takes an object literal as an input parameter. To register our component with our Angular module, we need to give the object literal the propertyÂ
declarations. TheÂ
declarations property is of a type array and by adding our component to that array we are registering it with the Angular module.Â
The following code shows the creation of an Angular module and the component being registered with it by being added toÂ
declarations keyword array:
import { AppComponent } from './app.component'; @NgModule({ declarations: [AppComponent] }) export class AppModule {}
At this point, our Angular module knows about the component. We need to add one more property to our module,Â
bootstrap. TheÂ
bootstrap keyword states that whatever is placed in here serves as the entry component for the entire application. Because we only have one component, so far, it makes sense to register our component with thisÂ
bootstrap keyword:
@NgModule({ declarations: [AppComponent], bootstrap: [AppComponent] }) export class AppModule {}
It's definitely possible to have more than one entry component, but the usual scenario is that there is only one.Â
For any future components, however, we will only need to add them to theÂ
declarationsÂ.Â
The
main.ts file is your bootstrap file and it should have the following content:
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic'; import { AppModule } from './app/app.module'; platformBrowserDynamic().bootstrapModule(AppModule);
What we do in the preceding code snippet is to provide the recently created module as an input parameter to the method callÂ
bootstrapModule(). This will effectively make the said module, the entry module of the application. This is all we need to create a working application. Let's summarize the steps we took to accomplish that:
- Create a component.
- Create a module and register our created component in its declaration property.
- Also register our component in the modules bootstrap property to make it serve as an application entry point. Future components we create just need to be added to the
declarationsproperty.
- Bootstrap our created module by using the said module as an input parameter to the
bootstrapModule()method.
You as a reader have had to swallow a lot of information at this point and take our word for it. Don't worry, you will get a chance to get more acquainted with components in this chapter as well as Angular modules in upcoming chapters. For now, the focus was just to get you up and running by giving you a powerful tool in the form of the Angular CLI and show you how few steps are actually needed to have an app rendered to the screen.
We have come a long way now, from tapping on TypeScript for the first time to learning how to code the basic scripting schema of an Angular component. However, before jumping into more abstract topics, let's try to build another component so we really get the hang of how creating it really works.
Create a new
timer.component.ts file in the same folder and populate it with the following basic implementation of a very simple component. Don't worry about the added complexity, as we will review each and every change made after the code block:
import { Component } from '@angular/core'; @Component({ selector: 'timer', template: `<h1>{{ minutes }}:{{ seconds }} </h1>>` }) export class TimerComponent { minutes: number; seconds: number; constructor(){ this.minutes = 24; this.seconds = 59; } }
At this point, we have created a whole new component by creating the
TimerComponent class and decorated it with
@Component, just as we learned how to do in a previous section. We learned in the previous section that there is more to be done, namely to tell an Angular module that this new component exists. The Angular module is already created so you just need to add our fresh new component to itsÂ
declarations property, like so:
@NgModule({ declarations: [ AppComponent, TimerComponent ], bootstrap: [AppComponent] })
As long as we only had theÂ
AppComponent we didn't really see the point of having an Angular module. With two components registered with our module, this changes. When a component is registered with an Angular module it becomes available to other constructs in the module. It becomes available to theirÂ
template/templateUrl. This means that we can haveÂ
TimerComponent rendered inside of ourÂ
AppComponent.
Let's therefore go back to ourÂ
AppComponent file and update its template to show just that:
@Component({ selector: 'app', template: `<h1>{{ title }}</h1> <timer></timer>` }) export class AppComponent { title: string = 'hello app'; }
In the preceding code, we highlight in bold how we add theÂ
TimerComponent to theÂ
AppComponents template. Or rather we refer to theÂ
TimerComponent by itsÂ
selector property name, which isÂ
timer.
Let's show theÂ
TimerComponent again, in it's entirety, and highlight theÂ
selector property because this is a really important thing to understand; that is, how to place a component in another component:
import { Component } from '@angular/core'; @Component({ selector: 'timer', template: `<h1>{{ minutes }}:{{ seconds }} </h1>>` }) export class TimerComponent { minutes: number; seconds: number; constructor(){ this.minutes = 24; this.seconds = 59; } }() { if(--this.seconds < 0) { this.seconds = 59; if(--this.minutes < 0) { this.minutes = 24; this.seconds = 59; } } }
Note
Selectors in Angular are case sensitive. As we will see later in this book, components are a subset of directives that can support a wide range of selectors. When creating components, we are supposed to set a custom tag name in the
selector property by enforcing a dash-casing naming convention. When rendering that tag in our view, we should always close the tag as a non-void element. SoÂ
<custom-element></custom-element>Â is correct, whileÂ
<custom-element />Â will trigger an exception. Last but not least, certain common camel case names might conflict with the Angular implementation, so avoid them., which we will cover in more detail in Chapter 3,(); setInterval(() => this.tick(), 1000); } reset() { this.minutes = 24; this.seconds = 59; } private tick() { if(--this.seconds < 0) { this.seconds = 59; if(--this.minutes < 0) { this.reset(); } } } get in the way. We need to provide some sort of interactivity so the user can start, pause, and resume the current Pomodoro timer.
Angular provides top-notch support for events through a declarative interface. This means it is easy to hook up events and have the point to method. It's also easy to bind data to different HTML attributes, as you are about to learn.
Let's first modify our template definition:
@Component({ selector: 'timer', template: ` <h1>{{ minutes }}: {{ seconds }} </h1> <p> <button (click)="togglePause()"> {{ buttonLabel }}</button> </p> ` })
We used a multiline text string! ECMAScript 6 introduced the concept of template strings, which are string literals with support for embedded expressions, interpolated text bindings, and multiline content. We will look into them in more detail in Chapter 3, Introducing TypeScript.
In the meantime, just focus on the fact that we introduced a new chunk of HTML that contains a button with an event handler that listens to click events and executes the
togglePause()method upon clicking. ThisÂ
(click)Â attribute is something you might not have seen before, even though it is fully compliant with the W3C standards. Again, we will cover this in more detail in Chapter 4, Implementing Properties and Events in Our Components. Let's focus on the
togglePause()method and the new
buttonLabelbinding. First, let's modify our class properties so that they look like this:
export class TimerComponent { minutes: number; seconds: number; isPaused: boolean; buttonLabel: string; // rest of the code will remain as it is below this point }
We introduced two new fields. The first is
buttonLabel, which contains the text that will later on be displayed on our newly-created button.
isPaused is a newly-created variable that will assume aÂ
true/
false value, depending on the state of our timer. So, we might need a place to toggle the value of such a field. Let's create the
togglePause()Â method we mentioned earlier:
togglePause() { this.isPaused = !this.isPaused; // if countdown has started if(this.minutes < 24 || this.seconds < 59) { this.buttonLabel = this.isPaused ? 'Resume' : 'Pause'; } }
In a nutshell, the
togglePause() method just switches the value of
isPaused to its opposite and then, depending on such a new value and whether the timer has started (which would entail that any of the time variables has a value lower than the initialisation value) or not, we assign a different label to our button.
Now, we need to initialize these values, and it seems there is no better place for it. So, the
reset()Â function is the place where variables affecting the state of our class are initialized:
reset() { this.minutes = 24; this.seconds = 59; this.buttonLabel = 'Start'; this.togglePause(); }
By executing
togglePause() every time, we reset it the to make sure that whenever it reaches a state where it requires to be reset, the countdown behavior will switch to the opposite state it had previously. There is only one tweak left in the controller method that handles the countdown:
private tick() { if(!this.isPaused) { this.buttonLabel = 'Pause'; if(--this.seconds < 0) { this.seconds = 59; if(--this.minutes < 0) { this.reset(); } } } }.
So far, we have reloaded the browser and played around with the newly created toggle feature. However, there is apparently something that still requires some polishing: when the seconds counter is less than 10, it displays a single-digit number instead of the usual two-digit numbers we are used to seeing in digital clocks and watches. Luckily, Angular implements a set of declarative helpers that format the data output in our templates. We call them pipes, and we will cover them in detail later in Chapter 4, Implementing Properties and Events in Our Components. For the time being, let's just introduce the number pipe in our component template and configure it to format the seconds output to display two digits all the time. Update our template so that it looks like this:
@Component({ selector: 'timer', template: ` <h1>{{ minutes }}: {{ seconds | number: '2.0' }}</h1> 6, Building an Application with Angular style sheet we downloaded through npm when installing the project dependencies. Open
timer.html and add this snippet at the end of the
<head> element:
<link href="" rel="stylesheet"">
Now, let's beautify our UI by inserting a nice page header right before our component:
<body> <nav class="navbar navbar-default navbar-static-top"> <div class="container"> <div class="navbar-header"> <strong class="navbar-brand">My Timer</strong> </div> </div> </nav> </body>
Tweaking the component button with a Bootstrap button class will give it more personality and wrapping the whole template in a centering container will definitely compound up the UI. So let's update the template in our template to look like this:
<div class="text-center"> <img src="assets/img/timer.png" alt="Timer"> <h1> {{ minutes }}:{{ seconds | number:'2.0' }}</h1> <p> <button class="btn btn-danger" (click)="togglePause()">{{ buttonLabel }}</button> </p> </div>
We looked at web components according to modern web standards and how Angular components provide an easy and straightforward API to build our own components. We covered TypeScript and some basic traits of its syntax as a preparation for Chapter 3, Introducing TypeScript. We saw how to set up our working space and where to go to find the dependencies we need to bring TypeScript into the game and use the Angular library in our projects, going through the role of each dependency in our application.
Our first component taught us the basics of creating a component and also allowed us to get more familiar with another important concept, Angular modules, and also how to bootstrap the application. Our second component gave us the opportunity to discuss the form of a controller class containing property fields, constructors, and utility functions, and why metadata annotations are so important in the context of Angular applications to define how our component will integrate itself in the HTML environment where it will live. in no time.
 | https://www.packtpub.com/product/learning-angular-second-edition/9781787124929 | CC-MAIN-2020-40 | refinedweb | 3,756 | 53.21 |
Question:
I am trying to use this code for the Porter stemming algorithm in a C++ program I've already written. I followed the instructions near the end of the file for using the code as a separate module. I created a file, stem.c, that ends after the definition and has
extern int stem(char * p, int i, int j) ...
It worked fine in Xcode but it does not work for me on Unix with gcc 4.1.1--strange because usually I have no problem moving between the two. I get the error
ld: fatal: symbol `stem(char*, int, int)' is multiply-defined: (file /var/tmp//ccrWWlnb.o type=FUNC; file /var/tmp//cc6rUXka.o type=FUNC); ld: fatal: File processing errors. No output written to cluster
I've looked online and it seems like there are many things I could have wrong, but I'm not sure what combination of a header file, extern "C", etc. would work.
Solution:1
That error means that the symbol (stem) is defined in more than one module.
You can declare the symbol in as many modules as you want. A declaration of a function looks like this:
int stem(char * p, int i, int j);
You don't need the "extern" keyword, although it doesn't hurt anything. For functions declarations, it's implied.
A definition of a function looks like this:
int stem(char * p, int i, int j) { /* body of your function */ }
The "multiply-defined" error indicates that you have two modules with a definition for the same function. That usually means that you have two files that define the function, or two files that #include a file that defines the function. Normally, you should not put function definitions in files that you #include. Put the definition in a .c, .cpp, or .cc file and just put a declaration in a .h file that you #include.
For example, you could create a stem.h file with this in it:
int stem(char * p, int i, int j);
Then,
#include "stem.h".
Solution:2
The fact that Whatever.cpp has #include "stem.c" provides the first definition, and specifying stem.c on the compiler command line provides the second definition.
You should break up stem.c into a header file (With just function prototypes) and a .c file which contains the implemention. Include only the header file in Whatever.cpp
Solution:3
You need to add "C". You need to extern "C" { ... } and only actually define the function once. But you can declare it (the prototype) as often as you like.
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2019/04/tutorial-use-c-function-in-c-program.html | CC-MAIN-2019-18 | refinedweb | 452 | 67.96 |
Testing React components
Have peace of mind when using React Apollo in production
Running tests against code meant for production has long been a best practice. It provides additional security for the code that's already written, and prevents accidental regressions in the future. Components utilizing React Apollo, the React implementation of Apollo Client, are no exception.
Although React Apollo has a lot going on under the hood, the library provides multiple tools for testing that simplify those abstractions, and allows complete focus on the component logic. These testing utilities have long been used to test the React Apollo library itself, so they will be supported long-term.
An introduction
The React Apollo library relies on React's context to pass the
ApolloClient instance through the React component tree. In addition, React Apollo makes network requests in order to fetch data. This behavior affects how tests should be written for components that use React Apollo.
This guide will explain step-by-step how to test React Apollo code. The following examples use the Jest testing framework, but most concepts should be reusable with other libraries. These examples aim to use as simple of a toolset as possible, so React's test renderer will be used in place of React-specific tools like Enzyme and react-testing-library.
Note: As of React Apollo 3, all testing utilities can now be found in their own
@apollo/react-testingpackage.
Consider the component below, which makes a basic query, and displays its results:
import React from 'react'; import gql from 'graphql-tag'; import { useQuery } from '@apollo/react-hooks'; // Make sure the query is also exported -- not just the component export const GET_DOG_QUERY = gql` query getDog($name: String) { dog(name: $name) { id name breed } } `; export function Dog({ name }) { const { loading, error, data } = useQuery( GET_DOG_QUERY, { variables: { name } } ); if (loading) return <p>Loading...</p>; if (error) return <p>Error!</p>; return ( <p> {data.dog.name} is a {data.dog.breed} </p> ); }
Given this component, let's try to render it inside a test, just to make sure there are no render errors:
// Broken because it's missing Apollo Client in the context it('should render without error', () => { renderer.create(<Dog name="Buck" />); });
This test would produce an error because Apollo Client isn't available on the context for the
useQuery Hook to consume.
In order to fix this we could wrap the component in an
ApolloProvider and pass an instance of Apollo Client to the
client prop. However, this will cause the tests to run against an actual backend which makes the tests very unpredictable for the following reasons:
- The server could be down.
- There may be no network connection.
- The results are not guaranteed to be the same for every query.
// Not predictable it('renders without error', () => { renderer.create( <ApolloProvider client={client}> <Dog name="Buck" /> </ApolloProvider>, ); });
MockedProvider
The
@apollo/react-testing package exports a
MockedProvider component which simplifies the testing of React components by mocking calls to the GraphQL endpoint. This allows the tests to be run in isolation and provides consistent results on every run by removing the dependence on remote data.
By using this
MockedProvider component, it's possible to specify the exact results that should be returned for a certain query using the
mocks prop.
Here's an example of a test for the above
Dog component using
MockedProvider, which shows how to define the mocked response for
GET_DOG_QUERY:
// dog.test.js import { MockedProvider } from '@apollo/react-testing'; // The component AND the query need to be exported import { GET_DOG_QUERY, Dog } from './dog'; const mocks = [ { request: { query: GET_DOG_QUERY, variables: { name: 'Buck', }, }, result: { data: { dog: { id: '1', name: 'Buck', breed: 'bulldog' }, }, }, }, ]; it('renders without error', () => { renderer.create( <MockedProvider mocks={mocks} addTypename={false}> <Dog name="Buck" /> </MockedProvider>, ); });
The
mocks array takes objects with specific
requests and their associated
results. When the provider receives a
GET_DOG_QUERY with matching
variables, it returns the corresponding object from the
result key. A
result may alternatively be a function returning the object:
const mocks = [ { request: { query: GET_DOG_QUERY, variables: { name: 'Buck', }, }, result: () => { // do something, such as recording that this function has been called // ... return { data: { dog: { id: '1', name: 'Buck', breed: 'bulldog' }, }, } }, }, ];
Your mock request's variables object must exactly match the query variables sent from your component.
addTypename
You may notice the prop being passed to the
MockedProvider called
addTypename. The reason this is here is because of how Apollo Client normally works. When a request is made with Apollo Client normally, it adds a
__typename field to every object type requested. This is to make sure that Apollo Client's cache knows how to normalize and store the response. When we're making our mocks, though, we're importing the raw queries without typenames from the component files.
If we don't disable the adding of typenames to queries, the imported query won't match the query actually being run by the component during our tests.
In short, if queries are lacking
__typename, it's important to pass the
addTypename={false}prop to the
MockedProviders.
Testing loading states
In this example, the
Dog component will render, but it will render in a loading state, not the final response state. This is because
MockedProvider doesn't just return the data but instead returns a
Promise that will resolve to that data. By using a
Promise it enables testing of the loading state in addition to the final state:
it('should render loading state initially', () => { const component = renderer.create( <MockedProvider mocks={[]}> <Dog /> </MockedProvider>, ); const tree = component.toJSON(); expect(tree.children).toContain('Loading...'); });
This shows a basic example test that tests the loading state of a component by checking that the children of the component contain the text
Loading.... In an actual application, this test would probably be more complicated, but the testing logic would be the same.
Testing final state
Loading state, while important, isn't the only thing to test. To test the final state of the component after receiving data, we can just wait for it to update and test the final state.
const wait = require('waait'); it('should render dog', async () => { const dogMock = { request: { query: GET_DOG_QUERY, variables: { name: 'Buck' }, }, result: { data: { dog: { id: 1, name: 'Buck', breed: 'poodle' } }, }, }; const component = renderer.create( <MockedProvider mocks={[dogMock]} addTypename={false}> <Dog name="Buck" /> </MockedProvider>, ); await wait(0); // wait for response const p = component.root.findByType('p'); expect(p.children).toContain('Buck is a poodle'); });
Here, you can see the
await wait(0) line. This is a utility function from the
waait npm package. It delays until the next "tick" of the event loop, and allows time for that
Promise returned from
MockedProvider to be fulfilled. After that
Promise resolves (or rejects), the component can be checked to ensure it displays the correct information — in this case, "Buck is a poodle".
For more complex UI with heavy calculations, or delays added into its render logic, the
wait(0) will not be long enough. In these cases, you could either increase the wait time or use a package like
wait-for-expect to delay until the render has happened. The risk of using a package like this everywhere by default is that every test could take up to five seconds to execute (or longer if the default timeout has been increased).
Testing error states
Since they can make or break the experience a user has when interacting with the app, error states are one of the most important states to test, but are often less tested in development.
Since most developers would follow the "happy path" and not encounter these states as often, it's almost more important to test these states to prevent accidental regressions.
To simulate a network error, an
error property can be included on the mock, in place of or in addition to the
result.
it('should show error UI', async () => { const dogMock = { request: { query: GET_DOG_QUERY, variables: { name: 'Buck' }, }, error: new Error('aw shucks'), }; const component = renderer.create( <MockedProvider mocks={[dogMock]} addTypename={false}> <Dog name="Buck" /> </MockedProvider>, ); await wait(0); // wait for response const tree = component.toJSON(); expect(tree.children).toContain('Error!'); });
Here, whenever the
MockedProvider receives a
GET_DOG_QUERY with matching
variables, it will return the error assigned to the
error property in the mock. This forces the component into the error state, allowing verification that it's being handled gracefully.
To simulate GraphQL errors, define
errors with an instantiated
GraphQLError object that represents your error, along with any data in your result.
const dogMock = { // ... result: { errors: [new GraphQLError('Error!')], }, };
Testing mutation components
useMutation based components are tested very similarly to
useQuery components. The only key difference is how the operation is fired. With
useQuery the query is fired when the wrapping component mounts, whereas with
useMutation the mutation is fired manually, usually after some user interaction like pressing a button.
Consider this component that calls a mutation:
export const DELETE_DOG_MUTATION = gql` mutation deleteDog($name: String!) { deleteDog(name: $name) { id name breed } } `; export function DeleteButton() { const [mutate, { loading, error, data }] = useMutation(DELETE_DOG_MUTATION); if (loading) return <p>Loading...</p>; if (error) return <p>Error!</p>; if (data) return <p>Deleted!</p>; return ( <button onClick={() => mutate({ variables: { name: 'Buck' } })}> Click me to Delete Buck! </button> ); }
Testing an initial render for this component looks identical to testing our
useQuery based component.
import DeleteButton, { DELETE_DOG_MUTATION } from './delete-dog'; it('should render without error', () => { renderer.create( <MockedProvider mocks={[]}> <DeleteButton /> </MockedProvider>, ); });
Calling the mutation is where things get interesting:
it('should render loading state initially', () => { const tree = component.toJSON(); expect(tree.children).toContain('Loading...'); });
This example looks very similar to the
useQuery based component above, but the difference comes after the rendering is completed. Since this component relies on a button to be clicked to fire a mutation, the renderer's API is used to find the button.
After a reference to the button has been obtained, a "click" on the button can be simulated by calling its
onClick handler. This will fire off the mutation, and then the rest will be tested identically to the
useQuery based component.
Note: Other test utilities like Enzyme and react-testing-library have built-in tools for finding elements and simulating events, but the concept is the same: find the button and simulate a click on it.
To test for a successful mutation after simulating the click, the fulfilled
Promise from
MockedProvider can be checked for the appropriate confirmation message, just like the
useQuery based component:
it('should delete and give visual feedback', async () => {); const tree = component.toJSON(); expect(tree.children).toContain('Deleted!'); });
The
result in a mocked mutation may be a function rather than an object. This gives you a simple way to check that a mutation has been called:
it('should delete and give visual feedback', async () => { const deleteDog = { name: 'Buck', breed: 'Poodle', id: 1 }; let deleteMutationCalled = false; const mocks = [ { request: { query: DELETE_DOG_MUTATION, variables: { name: 'Buck' }, }, result: () => { deleteMutationCalled = true; return {); expect(deleteMutationCalled).toBe(true); const tree = component.toJSON(); expect(tree.children).toContain('Deleted!'); });
For the sake of simplicity, the error case for mutations hasn't been shown here, but testing
useMutation errors is exactly the same as testing
useQuery errors: just add an
error to the mock, fire the mutation, and check the UI for error messages.
Testing UI components isn't a simple issue, but hopefully these tools will create confidence when testing components that are dependent on data.
For a working example showing how to test components, check out this project on CodeSandbox: | https://www.apollographql.com/docs/react/development-testing/testing/ | CC-MAIN-2020-24 | refinedweb | 1,898 | 51.58 |
Created on 2003-08-26 03:37 by customdesigned, last changed 2012-05-16 01:37 by r.david.murray.
The enclosed real life (inactivated) virus message
causes email.Message to fail to find the multipart
attachments. This is because the headers following
Content-Type are indented, causing email.Message to
properly append them to Content-Type. The trick is
that the boundary is quoted, and Outhouse^H^H^H^H^Hlook
apparently gets a value of 'bound' for boundary,
whereas email.Message gets the value
'"bound"\n\tX-Priority...'. email.Utils.unqoute
apparently gives up and doesn't remove any quotes.
I believe that unqoute should return just what is
between the quotes, so that '"abc" def' would be
unquoted to 'abc'. In fact, my email filtering
software () works
correctly on all kinds of screwy mail using my version
of unquote using this heuristic. I believe that header
used by the virus is invalid, so a STRICT parser should
reject it, but a tolerant parser (such as a virus
scanner would use) should use the heuristic.
Here is a brief script to show the problem (attached
file in test/virus5):
----------t.py----------
import email
msg = email.message_from_file(open('test/virus5','r'))
print msg.get_params()
---------------------
$ python2 t.py
[('multipart/mixed', ''), ('boundary',
'"bound"\n\tX-Priority: 3\n\tX-MSMail-Priority:
Normal\n\tX-Mailer: Microsoft Outlook Express
5.50.4522.1300\n\tX-MimeOLE: Produced By Microsoft
MimeOLE V5.50.4522.1300')]
Logged In: YES
user_id=142072
Here is a proposed fix for email.Util.unquote (except it
should test for a 'strict' mode flag, which is current only
in Parser):
def unquote(str):
"""Remove quotes from a string."""
if len(str) > 1:
if str.startswith('"'):
if str.endswith('"'):
str = str[1:-1]
else: # remove garbage after trailing quote
try: str = str[1:str[1:].index('"')+1]
except: return str
return str.replace('\\\\', '\\').replace('\\"', '"')
if str.startswith('<') and str.endswith('>'):
return str[1:-1]
return str
Actually, I replaced only email.Message._unquotevalue for my
application to minimize the impact. That would also be a
good place to check for a STRICT flag stored with the
message object. Perhaps the Parser should set the Message
_strict flag from its own _strict flag.
Logged In: YES
user_id=12800
Moving this to feature requests for Python 2.4. If
appropriate, the email-sig should address this in the
intended new lax parser for email 3.0 / Python 2.4. We
can't add this to the Python 2.3 (or earlier) maintenance
releases.
I'm still seeing this behaviour as of Python 2.6a0.
Barry: I take it email-sig didn't get around to discussing this?
If I understand RFC2822 3.2.2. Quoted characters (heh), unquoting must
be done in one pass, so the current replace().replace() is wrong. It
will change '\\"' to '"', but it should become '\"' when unquoted.
This seems to work:
re.sub(r'\\(.)',r'\1',s)
I haven't encountered a problem with this; I just came across it while
looking at the file Utils.py (Python 2.4, but unchanged in trunk). I
will submit a new bug if desired.
Good candidate for the email sprint. Fix suggested inline. | http://bugs.python.org/issue795081 | crawl-003 | refinedweb | 533 | 70.5 |
MOUNT_HFS(8) BSD System Manager's Manual MOUNT_HFS(8) NAME
mount_hfs -- mount an HFS/HFS+ file system SYNOPSIS
mount_hfs [-e encoding] [-u user] [-g group] [-m mask] [-o options] [-j] [-c] [-w] [-x] special directory DESCRIPTION
The mount_hfs command attaches the HFS file system residing on the device special to the global file system namespace at the location indi- cated by directory. This command is normally executed by mount(8) at boot time. The options are as follows: -e encoding (standard HFS volumes only) Specify the Macintosh encoding. The following encodings are supported: Arabic, ChineseSimp, ChineseTrad, Croatian, Cyrillic, Greek, Hebrew, Icelandic, Japanese, Korean, Roman (default), Romanian, Thai, Turkish -u user Set the owner of the files in the file system to user. The default owner is the owner of the directory on which the file system is being mounted. The user may be a user-name, or a numeric value. -g group Set the group of the files in the file system to group. The default group is the group of the directory on which the file system is being mounted. The group may be a group-name, or a numeric value. . -o Options are specified with a -o flag followed by a comma separated string of options. See the mount(8) man page for possible options and their meanings. -j Ignore the journal for this mount. -c Disable group commit for journaling. | https://www.unix.com/os-x-apple-/120106-permissions-trouble-webdav.html?s=131ef0e21f1f3f4f5b5c5076d23c0890 | CC-MAIN-2020-45 | refinedweb | 233 | 53.92 |
Opened 7 years ago
Closed 7 years ago
#1298 closed defect (fixed)
blocking "44" makes "#4" blocked
Description (last modified by coderanger)
when i fill "blocking" with "44" in #123, ticket "#4" is blocked by "#123", i'm sure this is wrong.
Index: mastertickets/util.py =================================================================== --- mastertickets/util.py (revision 2071) +++ mastertickets/util.py (working copy) @@ -10,10 +10,15 @@ db = env.get_db_cnx() cursor = db.cursor() - cursor.execute('SELECT ticket FROM ticket_custom WHERE name=%s AND (value LIKE %s OR value LIKE %s)', + cursor.execute('SELECT ticket,value FROM ticket_custom WHERE name=%s AND (value LIKE %s OR value LIKE %s)', ('blocking', '%%%s,%%'%tkt, '%%%s'%tkt)) - blocking_ids = [row[0] for row in cursor] + blocking_ids = [] + for row in cursor: + (ticket, value) = row + blocks = value.split(',') + if tkt in blocks: + blocking_ids.append(ticket) return blocking_ids def linkify_ids(env, req, ids): - return Markup(', '.join([unicode(html.A('#%s'%i, href=req.href.ticket(i), class_='%s ticket'%Ticket(env, i)['status'])) for i in ids])) \ No newline at end of file + return Markup(', '.join([unicode(html.A('#%s'%i, href=req.href.ticket(i), class_='%s ticket'%Ticket(env, i)['status'])) for i in ids])) Index: setup.py
there is nice sql function FIND_IN_SET() but is mysql specified (not sure)
Attachments (0)
Change History (10)
comment:1 Changed 7 years ago by shap
comment:2 follow-up: ↓ 8 Changed 7 years ago by anonymous
- Cc chris@… added
I'm seeing this one too.
A regexp in the SQL query to ensure the matching ticket ID was word-boundary delimited would pass, but this would require regexp in SQLITE, which appears not to be a builtin. So the filtering pass is a good workable fix.
When filterin pass is added, the SQL doesn't need the second ( LIKE OR LIKE ). I am using,
def blocked_by(env, tkt): if isinstance(tkt, Ticket): tkt = tkt.id # Allow passing a Ticket object db = env.get_db_cnx() cursor = db.cursor() cursor.execute('SELECT ticket, value FROM ticket_custom WHERE name=%s AND value LIKE %s', ('blocking', '%%%s'%tkt)) blocking_ids = [] for row in cursor: (ticket,value) = row blocks = value.split(',') if tkt in blocks: blocking_ids.append(ticket) return blocking_ids
comment:3 Changed 7 years ago by Paresh.Solanki@…
- Cc Paresh.Solanki@… added
I also have this problem:
Tickets 50, 51 and 52 are blocking 49, but when ticket 9 is viewed, it also says it is being blocked by 50, 51 and 52.
However, this only seems to affect single digit tickets as tickets 19, 29 and 39 are not affected by this.
It's not really a fix, but a quick work around would be to create the first 10 tickets as dummies and close them off before using the system for real?
comment:4 Changed 7 years ago by anonymous
That's because '9' is a substring of '49', but '19' is not. (In SQL, %9% matches 49.) You'd also have to create dummy tickets each time your ticket count went up by a factor of ten, and 10% of your tickets would always be dummy tickets. (My maths may be wrong, it's early.)
Either of the patches above should fix the problem for you.
comment:5 follow-up: ↓ 6 Changed 7 years ago by Stefan
Isn't the real problem in storing multiple values (x,y,z) in one column (blocking)? That breaks the 1NF in table design.
comment:6 in reply to: ↑ 5 Changed 7 years ago by coderanger
Isn't the real problem in storing multiple values (x,y,z) in one column (blocking)? That breaks the 1NF in table design.
Yes, however I was hoping to avoid maintaining additional invariant data about ticket links. The next iteration of this plugin (which probably won't be until after workflow is merged to trunk) will use it's own table for data storage.
comment:7 Changed 7 years ago by jhulten
comment:8 in reply to: ↑ 2 Changed 7 years ago by anonymous
- Severity changed from normal to major
comment:9 Changed 7 years ago by anonymous
you tell the case, and i fix the issue
comment:10 Changed 7 years ago by coderanger
This is now fixed in trunk.
I confirm this bug. I confirm that the patch appears to resolve it, though it does seem strange to need to do a filtering pass this way. I'm not a SQL user much, but surely there is a way to filter for an exact match on the ticket in the initial SELECT?
Anyway, as a workaround it seems to work. | http://trac-hacks.org/ticket/1298 | CC-MAIN-2014-10 | refinedweb | 756 | 63.19 |
Providers is a term in Cerebral that basically means side effects. Everything from talking to the server, browser APIs etc. should be contained in providers. This has several benefits:
You are encouraged to build an explicit API for doing side effects in your application
The providers are automatically tracked by the debugger, giving you insight of their usage
When running tests a provider is easy to mock and can be automatically mocked by the snapshot tests
In our application we want to talk to a service called JSONPlaceholder and for that we need a provider. You can choose any library to talk to the server, but we will just use the browser standard fetch.
import { App } from 'cerebral' import Devtools from 'cerebral/devtools' const API_URL = '' const app = App({ state: { title: 'My Project', posts: [], users: {}, userModal: { show: false, id: null }, isLoadingPosts: false, isLoadingUser: false, error: null }, providers: { api: { getPosts() { return fetch(`${API_URL}/posts`) .then(response => response.toJSON()) }, getUser(id) { return fetch(`${API_URL}/users/${id}`) .then((response) => response.json()) } } } }, {...})
We have now added a provider that uses the native fetch API of the browser to grab posts and users from JSONPlaceholder. Instead of creating a generic http provider we went all the way and created a specific provider for talking to JSONPlaceholder called api. The concept of a provider allows us to do this and it is highly encouraged as it will improve the readability of the application code.
We are now ready to make some stuff happen in our application! | https://cerebraljs.com/docs/introduction/providers.html | CC-MAIN-2019-26 | refinedweb | 249 | 58.01 |
For those of you who maintain kde based packages for Debian this is for you. alpha currently uses gcc2.9x by default. gcc3 is required to build the kde packages so we have to do some magic to make sure that any joe blow can apt-get source package and build it with no probs. Now, we can do this all on the buildd side but then the buildd would be the only thing that could build it...which really is alot of work for nothing if you ask me...so we do it this way to make sure. First you need to make sure you build-depend on g++-3.0 [alpha] so that we are sure g++-3.0 is installed...I don't know if it's a default package or not so it may not be needed...I put it there just in case since g++-3 is not the default compiler yet... Then in debian/rules you need to add this at the top somewhere: ARCH = $(shell dpkg-architecture -qDEB_BUILD_ARCH) ifeq ($(ARCH),alpha) COMPILER_FLAGS=CXX=g++-3.0 CC=gcc-3.0 else COMPILER_FLAGS=CXX=g++ CC=gcc endif then make your configure line look something like this: if test ! -f configure; then \ $(MAKE) -f admin/Makefile.common ;\ fi $(COMPILER_FLAGS) \ ./configure $(configkde) \ --libdir=$(kde_libdir) --includedir=$(kde_includedir) this at least works...if you have a cleaner solution let me | https://lists.debian.org/debian-kde/2001/07/msg00039.html | CC-MAIN-2015-32 | refinedweb | 233 | 85.49 |
.to(myObject, 2, {x:100, y:200});
The above code will tween
myObject.x from whatever it currently is to 100 and
myObject.y property to 200 over the course of 2 seconds. Notice the x and y values are
defined inside a generic object (between curly braces). Put as many properties there as you want..from()method to animate things into place. For example, if you have things set up on the stage in the spot where they should end up, and you just want to animate them into place, you can pass in the beginning x and/or y and/or alpha (or whatever properties you want).
Copyright.
public static var ticker:Shape
The object that dispatches a
"tick" event each time the engine updates, making it easy for
you to add your own listener(s) to run custom logic after each update (great for game developers).
Add as many listeners as you want. The basic syntax is the same for all versions (AS2, AS3, and Javascript):
Basic example (AS2, AS3, and Javascript):
//add listener TweenNano.ticker.addEventListener("tick", myFunction); function myFunction(event) { //executes on every tick after the core engine updates } //to remove the listener later... TweenNano.ticker.removeEventListener("tick", myFunction);
Due to differences in the core languages (and to maximize efficiency), the advanced syntax is slightly different for the AS3 version compared to AS2 and Javascript. The parameters beyond the first 2 in the addEventListener() method are outlined below:
Javascript and AS2
addEventListener(type, callback, scope, useParam, priority)
Parameters:
"tick"
this" refers to in your function). This can be very useful in Javascript and AS2 because scope isn't generally maintained.
true, an event object will be generated and fed to the callback each time the event occurs. The event is a generic object and has two properties:
type(always
"tick") and
targetwhich refers to the ticker instance. The default for
useParamis
falsebecause it improves performance.
Advanced example (Javascript and AS2):
//add listener that requests an event object parameter, binds scope to the current scope (this), and sets priority to 1 so that it is called before any other listeners that had a priority lower than 1... TweenNano.ticker.addEventListener("tick", myFunction, this, true, 1); function myFunction(event) { //executes on every tick after the core engine updates } //to remove the listener later... TweenNano.ticker.removeEventListener("tick", myFunction);
AS3
The AS3 version uses the standard
EventDispatcher.addEventListener() syntax which
basically allows you to define a priority and whether or not to use weak references (see Adobe's
docs for details).
Advanced example [AS3 only]:
import flash.events.Event; //add listener with weak reference (standard syntax - notice the 5th parameter is true) Twe
Provides a simple way to call a function after a set amount of time (or frames). You can optionally pass any number of parameters to the function too.
Javascript and AS2 note: - Due to the way Javascript and AS2 don't
maintain scope (what "
this" refers to, or the context) in function calls,
it can be useful to define the scope specifically. Therefore, in the Javascript and AS2
versions the 4th parameter is
scope, bumping
useFrames
back to the 5th parameter:
Twe instance that tweens backwards - you define the BEGINNING values and the current values are used as the destination values which is great for doing things like animating objects onto the screen because you can set them up initially the way you want them to look at the end of the tween and then animate in from elsewhere.
NOTE: By default,
immediateRender is
true in
from() tweens, meaning that they immediately render their starting state
regardless of any delay that is specified. You can override this behavior by passing
immediateRender:false in the
vars parameter so that it will
wait to render until the tween actually begins..from(mc, 1, {alpha:0, delay:0.5})),
but it is highly recommended that you consider using TimelineLite (or TimelineMax)
for all but the simplest sequencing tasks. It has an identical
from() method
that allows you to append tweens one-after-the-other and then control the entire sequence
as a whole. You can even have the tweens overlap as much as you want.
ParametersReturns
See also
public.killTweensOf(myFunction); because delayedCalls
are simply tweens that have their
target and
onComplete set to
the same function (as well as a
delay of course).
killTweensOf() affects tweens that haven't begun yet too. If, for example,
a tween of
myObject has a
delay of 5 seconds and
Twe.to(mc, 1, {x:100});
Each line above will tween the
"x" property of the
mc object
to a value of 100 over the coarse of 1 second. They each use a slightly different syntax,
all of which are valid. If you don't need to store a reference of the tween, just use the
static
Twe.to(mc, 1, {x:100, delay:0.5})),
but it is highly recommended that you consider using TimelineLite (or TimelineMax)
for all but the simplest sequencing tasks. It has an identical
to() method
that allows you to append tweens one-after-the-other and then control the entire sequence
as a whole. You can even have the tweens overlap as much as you want.
ParametersReturns
See also | https://greensock.com/asdocs/com/greensock/TweenNano.html | CC-MAIN-2022-40 | refinedweb | 874 | 52.6 |
The actual question is: Is there a way to get XmlWebApplicationContext to load resources using paths relative to the context location? For clarity's sake, let's say "context location" is the location ...
I'm using the Spring Form library to handle a search page in my application. Here is a snipped from my DD showing the bean configuration:
<bean name="/search.html" class="myapp.web.AccountSearchController">
...
I am trying to register an interceptor using a annotation-driven controller configuration. As far as I can tell, I've done everything correctly but when I try testing the interceptor nothing ...
Good people:
is there a way to express that my Spring Web MVC controller method should be matched either by a request handing in a ID as part of the URI path ...
I am engaged in a project where I need to show path bread crumbs to the user like
Home (This is linked to home page) >> (page name)
...
I am trying to convert my app written in spring mvc 2.5 to use the annotated style of controller.
Apparently, I am unable to get things going. Hope somebody could help ...
I am new to web programming and Spring MVC 2.5.
I am not sure about my problem is spring sprecific or web specific in general.
I have a menu.jsp and I use jsp:include ...
How can I write the freemarker templates like this:
<#import "spring.ftl" as s>
<@s.form
<@s.formInput "name"/> <!-- I want this resolved as "object.name" -->
...
My Spring MVC application is runnning on a Tomcat behind an Apache 2 acting as a proxy. I access my app directly in tomcat via an url like. I access ...
@RequestMapping(value = "/post/{postThreadId}", method = RequestMethod.GET)
@ResponseBody
public String paramTest(PostParams params) {
return params.toString();
}
("postThreadID")
setPostThreadId(int ...)
I know I can validate forms in Spring, but can I apply similar validate to URL parameters? For example, I have a method in my controller as follows:
public String edit(@PathVariable("system") String ...
I have a spring bound form (modelAttribute) which displays the user information.
The user's telephone number is displayed in a formatted manner but a requirement is that the number is saved to ...
I have an app that may run at or. I have a shared header with a search form that needs to go to
Hey, I'm pretty new in Spring MVC, and I'm learning the JSP tags and data. I have this Spring jsp where I want to fill a dropdown box with data from ...
I'm trying to get my feet wet with Spring MVC 3.0, and while I can get it to work, I can't seem to handle this particular scenario efficiently.
I have a controller ...
I have a simple web app
webapp
static
images
- a.gif
...
I have a question I am struggling with. Just a hypothetical situation here. For example I have two folders jspPages1 and jspPages2. There are jsp pages I want to keep separated. ...
i would like that my velocityengine look for templates from a designed path.
i did this :
<bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
<property name="velocityProperties">
<value>
resource.loader=class
...
Is there a way to use relative path, say relative to class path or /META-INF in Spring bean definition file? This is a bit different from using ServletContext to obtain such ...
ServletContext
I am very new to Spring MVC and am seeing a rather trivial behavior I don't understand.
Bellow you can find snippets to my Controller (consider I have feed.jsp and feedList.jsp). What ...
As we know, we can config an interceptor like that:
<mvc:interceptor>
<mvc:mapping
<bean class="OpenSessionInViewInterceptor">
...
To remove the language toggle from the page view(Comfirmation Page)
I found this code but it doesn't work in Spring MVC
<c:if
//Other Code
</c:if>
I need to map interceptor for all methods in annotated controller with @RequestMapping(value = "/client")
In mapping I have
<mvc:interceptor>
<mvc:mapping
<bean class="com.cci.isa.web.CIPClientHandleInterceptor" />
</mvc:interceptor>
In my application I have to compare 3 products for that in my controller I mapped request as
@RequestMapping(value = "/products/{proId1}Vs{proId2}Vs{proId3}", method = RequestMethod.GET)
public ModelAndView compareThreeProducts(@PathVariable("proId1") int id1, @PathVariable("proId2") int id2, ...
all,
I am writing a demo application to learn usage of the
org.springframework.web.servlet.mvc.support.ControllerClassNameHandlerMapping
Problem Context:
I am trying to persist a file, uploaded by the user, to file system.
Description:
Actually, I am using Spring MVC (3.0) and this is how I am trying to persist the ...
I am working on some legacy code running Spring 2.5. I need to use something similar to Spring 3's @PathVariable...anything similar available in Spring 2.5?
Posted in spring forum with no response.
I have the following code snippet (from here), which is part of my pet project.
@Controller
@RequestMapping("/browse")
public class MediaBrowser {
...
I'm trying to map a request to static resources in a spring environment. My app server is Jetty.
In web.xml, I'm mapping various url patterns to my spring servlet:
<servlet-mapping>
...
pls help me to resolve next issue! I have next config:
<mvc:resources
<mvc:resources
<mvc:resources
I've googled this problem, but no one seem so the have exactly the same problem as me.
I'm trying to setup a simple Spring MVC application. These are the relevant files:
web.xml
<?xml version="1.0" ...
URL path matching problem in Spring MVC 3.0 Hi, I have problem with URL matching in Spring MVC 3.0 in tomcat 6.0. I have mapped DispatcherServlet with following mapping in my ...
Hello I need to map interceptor for all methods in annotated controller with @RequestMapping(value = "/client") In mapping I have This interceptor called perfectly ...
Hi, how do i map the path/folder which contains the jsp files so that i can view list of all my jsp from within HTML ... I know i may need ...
no URL paths identified with Spring MVC 3.0 Hello I have an application with Spring MVC 3.0, my goal is to use annotation to the web tier, we have proceeded to ...
I validate my formular with javax.validation... JSP: Code: User* UserName is not a String, it is an Object: Class UserName: Code: public class UserName ...
spring 3 with tiles 2 mvc:annotation adds context path in front of /WEB-INF/... using spring 3.0.5 release and tiles 2.2.2 myapp-servlet.xml Code: | http://www.java2s.com/Questions_And_Answers/Spring/MVC/path.htm | CC-MAIN-2022-33 | refinedweb | 1,097 | 67.86 |
Numerical Method Inc. has the vision to promote rational investment and trading. We offer the best value continuous training and education for capital market professionals.
Our Vision
We have chosen to do wealth management, investment or trading, among many alternatives like value investing or technical analysis, in a mathematical way. The very reason is our passion for seeking the truth. Scientist Joseph Needham concluded that it was the zeal for truth that sparked and fueled the European advancement of science. Professors Chincarini and Kim argued that truth triumphs Gordon Gekko’s greed in the financial world. “Mathematics is the language with which God wrote the universe,” wrote Galileo Galilei. It is the supreme scientific truth that our civilization has achieved so far (or so we thought until the mid of 20th century). In fact, as many philosophers such as Plato believe, it is the only scientific truth. For example, the laws of natural numbers or the value of π are fundamentally true or unchangeable and do not require any specific context. Newton’s laws are not like that. They do not apply to very big or very small worlds. Therefore, we want to use exactly the same language that describes the physical Universe so amazingly well to discover the truths in the financial world. On the other hand, if mathematics did not work, what would?
During our journey to learn about mathematical wealth management, we created a simple four step process to generate a trading strategy (see Haksun’s course lecture 1). This process requires three essential skills: (1) mathematics, (2) programming and (3) creativity. Mathematics is what translates a trading idea or intuition into well-defined meaningful symbols. Starting from the assumptions, we can derive the properties of the made-concrete trading strategy. Before betting our first $1, we can compute the expected return (or P&L distribution) and the expected holding time of a trade. Programming is what translates the mathematical symbols into lines of code for trading research and execution. An effective programming skill is like an effective communication skill. We collaborate with our research tools by “talking” to them. An effective usage of the tools increases the probability of generating effective trading strategies. At the very least, it reduces in the execution systems the number of bugs that could cause millions of dollars of losses. It is easy to hire good mathematicians; you look for them in New York City. It is easy to hire good programmers; you look for them in the Silicon Valley. However, it is extremely difficult to hire someone who can come up and code up complex mathematical trading strategy. Our education program focuses on teaching these two skills.
Our Uniqueness
We differentiate ourselves from the traditional master’s degrees in financial mathematics or financial engineering. Firstly, these programs take too long a time (e.g., 3 months for a semester) to touch only the surface of the subjects. For instance, the standard topics are: options pricing, stochastic calculus and data analysis. However, you do not really need to do a degree program to learn them. Reading the right books is more efficient and effective. I (namely, Haksun here) literally picked up the knowledge from my bedtime reading: Financial Calculus: An Introduction to Derivative Pricing, Baxter and Rennie; Introduction To Stochastic Calculus With Applications, Klebaner; Statistical Analysis of Financial Data in S-Plus, Carmona. (I know they are not the standard textbooks used in universities.) The point is that the knowledge is easy and does not require a teacher. On the other hand, some topics are very useful in mathematical trading and yet very hard to self-learn them. If you would like to challenge yourself, try to study cointegration (the theory not just the R package “urca”) by reading Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, Johansen, or try to solve an optimal asset allocation problem with jumps by reading Applied Stochastic Control of Jump Diffusions, Oksendal, Sulem. Our courses are designed to make these more useful yet rather inaccessible mathematics concepts easy to understand and thus accessible to you when designing your trading strategy. More importantly, the focus of our mathematics courses is on teaching mathematical thinking, namely translating a trading intuition into solid equations, rather than on formulas or mechanical computational rules.
Secondly, most graduates from these university programs cannot code professionally even though they may have gone through a year-long programming training. For example, if these students think that they can code in C++, think again after you read Scott Meyers’ books. From our interviewing experience, most junior programmers have not read the three books, hence not being able to code. Learning a (natural) language is not about learning words and grammars. Similarly, learning a programming language (Java/C#/C++/etc.) is not about learning the constructs and syntax. A professional programmer writes not only functional code that machines can read but elegant code that humans can read too. Writing elegant code is an art like painting or composing. The problem with bad spaghetti code is that there is no way to tell whether the code works or not maybe other than on a few toy examples. The consequence in trading could mean losses of millions of dollars. There are
some basic skills to elegant coding like debugging, testing, software design, design patterns, algorithm design and analysis. All these essential programming knowledge, especially the most important programming skill, hitting F7/F8 in NetBeans or F5/F6 in Eclipse (debugging), is completely absent in school curriculums. The reason is simple: even computer science students are not trained to do programming; PhDs write for their research the code that no one reads or uses; professors write papers and ask students to code for them. Our courses are designed to teach professional programming from the basic to the advanced techniques. The focus is on writing code that is solidly objected-oriented, unified/consistent, and testable.
Last but not least, the traditional university syllabuses focus on options pricing. Our personal opinion is that exotic derivative business is an evening industry since the housing bubble bursted in 2008. The money has shifted from exotic to flow or vanilla options. However, the flow business is a sales business. It does not take a lot of mathematics. How difficult is the Black-Scholes formula? OK. You may fit some volatility surface. But the money comes from customers willing to trade with you; again, it is a sales business. The desk probably makes more money by hiring a 22 year old cheerleader from USC rather than a 40 year old math PhD from MIT. The future of quantitative finance is uncertain. All banking professionals are searching for a new direction or the next gold mine. We believe, however, that there is always demand for wealth management as there are always wealthy people who are reluctant to put billions of dollars under their (big) mattess. Our courses do not teach any off-the-shelf profitable trading strategies (no one will) nor do we teach any get-rich-quick schemes (only scammers would). Our courses are designed to survey some of the sophisticated mathematical trading ideas from the academic world. From these published papers, we learn how to think mathematically, become equipped with the essential mathematics knowledge at fingertips to use and understand them, and get well versed in programming. In other words, our education objective is to train all-rounded would-be super-stars in the mathematical wealth management business. | http://numericalmethod.com/cqien/certificate-in-quantitative-investment/ | CC-MAIN-2017-26 | refinedweb | 1,245 | 54.52 |
The Groundside Blog by Duncan Mills Development Tools, Frameworks and more... 2015-09-04T15:35:17+00:00 Apache Roller

Using a Translatable String as the Default Value for a Bind Variable Duncan Mills-Oracle 2014-10-15T12:26:44+00:00 2014-10-15T12:26:44+00:00

<p>A question came up recently about whether the default value of a view object bind variable could be supplied from a translatable resource bundle rather than being hard-coded as a literal string.</p>
<p>I wondered here if it would be possible to convert the default value for the bind to be an expression rather than a literal and use groovy to derive the translated string.</p>
<p>Well it took a little bit of time to come up with the correct path, but indeed it turns out to be possible, with one small constraint about the keys in the bundle that have to be used.</p>
<p>So to start off, here's the groovy expression I use in the default value for the bind variable:</p>
<pre>source.viewObject.getProperty('TRANSLATABLE_BINDVAR')</pre>
<p>Here, TRANSLATABLE_BINDVAR is the name of a custom property defined on the view object that owns the bind variable. When you mark the value of such a custom property as translatable, its value is stored in the model project's resource bundle under a key of the following form:</p>
<pre>oracle.adf.demo.model.ValueFromBIndVO.TRANSLATABLE_BINDVAR_VALUE</pre>
<p>This is the resource that you can then translate and the value of which will be used at runtime. The constraint mentioned above is exactly this key format - the fully qualified view object name, followed by the property name, followed by the _VALUE suffix.</p>
<h3>An Explanation</h3>
<p>So what does this all mean? Unfortunately there is no way (that I've found) in groovy to reach directly into the UI hints of the bind variable itself, even if you start the expression with <i>adf.object</i>. Routing the lookup through a translatable custom property on the owning view object, as shown above, achieves the same net effect: the default value is resolved from the resource bundle at runtime and can therefore be translated.</p>
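<p>For completeness, here's a rough sketch of what the bind variable definition might end up looking like in the view object XML once the default value type is switched to Expression. Treat this as illustrative only - the variable name is invented and the exact shape of the XML can vary between releases:</p>
<pre>
<Variable
  Name="pTranslatedDefault"
  Kind="where"
  Type="java.lang.String">
  <!-- Default value type set to "Expression" in the variable editor -->
  <TransientExpression><![CDATA[
    source.viewObject.getProperty('TRANSLATABLE_BINDVAR')
  ]]></TransientExpression>
</Variable>
</pre>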
Showing Initial Selection in Your ListView Duncan Mills-Oracle 2014-10-09T07:24:16+00:00 2014-10-09T07:24:16+00:00

<p>If you use a ListView or a Table component which enables selection by the user in a master detail view you may feel that there is a slight inconsistency of behaviour. When the user first enters the screen it may, by default, look something like this (in this example a ListView to the left and the "selected" department detail to the right in a form layout):</p>
<img src="" alt="ListView showing no initial selection" width="400" />
<p>Notice that the associated detail form is showing the first row in the collection as you would expect, but the corresponding row in the ListView is not visually highlighted. Compare that with what happens when the user clicks on a row in the ListView to select another row:</p>
<img src="" alt="ListView with a selection made by the user" width="400" />
<p>Now the selected row in the listView is highlighted with the skin's selection colour.</p>
<p>So the question is, how can we show the page initially in a manner consistent with this?<br />e.g.</p>
<img src="" alt="Corrected initial selection" width="400" />
<p>Well in fact the binding layer already has everything you need and it's a trivial thing to fix. All we have to do here is to set the <strong>selectedRowKeys</strong> attribute of the listView tag. So, assuming we have a listView showing departments, the tag would look something like this:</p>
<pre>
<af:listView value="#{bindings.DepartmentsView1.collectionModel}"
             selectionListener="#{bindings.DepartmentsView1.collectionModel.makeCurrent}"
             selectedRowKeys="#{bindings.DepartmentsView1.collectionModel.selectedRow}"
             selection="single"
             ...>
</pre>
<h3>Doing It From Scratch</h3>
<p>So this is an example where the binding layer does everything for you, great! However, what if you wanted to do this manually? I'm showing you this because it happens to illustrate a few useful techniques and code snippets. In general I'd stick with the simple approach above though!</p>
<p>Setting this up requires some simple steps and a small amount of code, so let's go.</p>
<h4>Step 1: Defining the Selected Row Keys</h4>
<p>The core mechanism of this solution is to pre-seed the <em>selectedRowKeys</em> property of the listView component with the row key of whichever row is current in the underlying collection. So the first step is to define somewhere to hold that row key set. We start by defining a ViewScope managed bean (called lvState in this example) and within that, we just define a variable to hold a RowKeySet reference.</p>
<p>First the Java class:</p>
<pre>
package oracle.adf.demo;

import org.apache.myfaces.trinidad.model.RowKeySet;

public class ListViewState {
    RowKeySet _selectedRowKeys;

    public void setSelectedRowKeys(RowKeySet rks) {
        this._selectedRowKeys = rks;
    }

    public RowKeySet getSelectedRowKeys() {
        return _selectedRowKeys;
    }
}
</pre>
<p>And define that as a managed bean in the relevant task flow definition:</p>
<pre>
<managed-bean>
  <managed-bean-name>lvState</managed-bean-name>
  <managed-bean-class>oracle.adf.demo.ListViewState</managed-bean-class>
  <managed-bean-scope>view</managed-bean-scope>
</managed-bean>
</pre>
<p>Once that's defined you can update your <af:listView> tag to reference it:</p>
<pre>
<af:listView value="#{bindings.DepartmentsView1.collectionModel}"
             selectedRowKeys="#{lvState.selectedRowKeys}"
             selection="single"
             ...>
</pre>
<h4>Step 2: Pre-Seeding the Row Key Set</h4>
<p>As defined, the row key set picked up by the listView will of course be null until the user physically makes a selection - we're no better off than before. So we now have to work out a way of taking the current selection from the model / binding layer, obtaining its key and then setting up the row key set <em>before</em> the listView is rendered.</p>
<p>The simplest way of doing this is to utilise the capability of JSF 2 to simply define events to be executed as the page is being set up for rendering. In this case we use the <strong>postAddToView</strong> event inside of the listView to execute the setup code:</p>
<pre>
<af:listView ...>
  <f:event type="postAddToView" listener="#{lvEvents.setupListViewSelection}"/>
  <af:listItem ...>
</pre>
<p>The event tag points at a managed bean called lvEvents (defined here in view scope). This bean needs access to the lvState managed bean that we defined in the first step:</p>
<pre>
<managed-bean>
  <managed-bean-name>lvEvents</managed-bean-name>
  <managed-bean-class>oracle.adf.demo.ListViewEventManager</managed-bean-class>
  <managed-bean-scope>view</managed-bean-scope>
  <managed-property>
    <property-name>state</property-name>
    <property-class>oracle.adf.demo.ListViewState</property-class>
    <value>#{lvState}</value>
  </managed-property>
</managed-bean>
</pre>
<p>The ListViewEventManager class should therefore have a variable to hold the state with the appropriate getter and setter:</p>
<pre>
public class ListViewEventManager {
    ListViewState _state;

    public void setState(ListViewState _state) {
        this._state = _state;
    }

    public ListViewState getState() {
        return _state;
    }
    ...
</pre>
<p>Finally, the implementation of the listener defined in the <f:event> tag needs to be added to the class:</p>
<pre>
public void setupListViewSelection(ComponentSystemEvent componentSystemEvent) {
    if (getState().getSelectedRowKeys() == null) {
        BindingContainer bindings = BindingContext.getCurrent().getCurrentBindingsEntry();
        DCIteratorBinding iterBind =
            ((DCBindingContainer) bindings).findIteratorBinding("DepartmentsView1Iterator");
        Key key = iterBind.getCurrentRow().getKey();
        RowKeySet rks = new RowKeySetImpl();
        rks.add(Collections.singletonList(key));
        getState().setSelectedRowKeys(rks);
    }
}
</pre>
<p>So now, when the page renders, if the selection row key set has not been set up due to user interaction we will go ahead and create it, seeding the row key of whatever row the model considers to be current.</p>

Ensuring High Availability in ADF Task Flows Duncan Mills-Oracle 2014-08-15T09:23:39+00:00 2014-12-01T14:57:55+00:00

<div>Just a quick article today on ADF Controller Scopes and specifically ensuring that your application is correctly propagating state stored in PageFlow and View Scope across the cluster. This information can be found in the product doc and in Jobinesh Purushothaman's <a href="" target="_blank">excellent book</a> (Chapter 12 - Ensuring High Availability), however, more references mean more eyes and fewer mistakes!</div>
<h2>Some Background</h2>
<div>When you store state in a managed bean scope, how long does it live and where does it live? Well hopefully you already know the basic answers here, and for scopes such as Session and Request we're just dealing with very standard stuff. One thing that might be less obvious, though, is how PageFlow and View Scope are handled. Now these scopes persist (generally) for more than one request, so there is obviously the possibility that you might get a fail-over between two of those requests. A Java EE server of whatever flavour doesn't know anything about these extra ADF memory scopes, so it can't be automatically managing the propagation of their contents, can it? Well the answer is yes and no. These "scopes" that we reference from the ADF world are ultimately stored on the Session (albeit with a managed lifetime by the framework), so you'd think that everything should be OK and no further work is going to be needed to ensure that any state in these scopes is propagated - right? Well no, not quite, it turns out that several key tasks are often missed out. So let's look at those.</div>
<h2>First of All - Vanilla Session Replication</h2>
<div>Assuming that WebLogic is all configured, this bit at least is all automatic, right? Well no. In order to "know" that an object in the session needs to be replicated, WebLogic relies on the HttpSession.setAttribute() API being used to put it onto the session. Now if you instantiate a managed bean in SessionScope through standard JSF mechanisms then this will be done and you're golden. Likewise if you grab the Faces ExternalContext and grab the Session through that (e.g. using the getSession() API), then call the setAttribute() API on HttpSession, you've correctly informed WebLogic of the new object to propagate.</div>
<div>You might already see, though, that there is a potential problem in the case where the object stored in the session is a bean and you're changing one of <u>its</u> properties. Just calling an attribute setter on an object stored on the session will not be a sufficient trigger to have that updated object re-propagated, so the version of the object elsewhere will be stale. So when you update a bean on the session in this way, and want to ensure that the change is propagated, then re-call the setAttribute() API.</div>
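<div>As a minimal sketch of that re-signalling step (the bean type and attribute key here are invented for the example):</div>
<pre>
import java.util.Map;
import javax.faces.context.FacesContext;

public void updateUserTheme(String theme) {
    // The session map is backed by HttpSession attributes
    Map<String, Object> sessionMap =
        FacesContext.getCurrentInstance().getExternalContext().getSessionMap();

    // "userPrefs" / UserPrefs are hypothetical names for this example
    UserPrefs prefs = (UserPrefs) sessionMap.get("userPrefs");
    prefs.setTheme(theme);

    // Re-put the bean: this translates to HttpSession.setAttribute() underneath,
    // which is the trigger WebLogic needs to re-replicate the updated object
    sessionMap.put("userPrefs", prefs);
}
</pre>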
<div>Got it? OK, on to the ADF scopes:</div>
<h2>Five Steps to Success For the ADF Scopes</h2>
<div>The View and PageFlow scopes are, as I mentioned, ultimately stored on the session. Just as in the case of any other object stored in that way, changing an internal detail of those representative objects would not trigger replication. So, we need some extra steps and of course we need to observe some key design principles whilst we're at it:</div>
<div>
<ol>
<li>Observe the <a href="" target="_blank">UI Manager Pattern</a> and only store state in View and PageFlow scope that is actually needed and is allowed (see 2)</li>
<li>As for any replicatable Session scoped bean, any bean in View or PageFlow scope must be serializable (there are audits in JDeveloper to gently remind you of this).</li>
<li>Only mark for storage that which cannot be re-constructed. Again a general principle; we wish to replicate as little as possible, so use the transient marker in your beans to exclude anything that you could possibly reconstruct over on the other side (so to speak).</li>
<li>In the setters of any attributes in these beans (that are not transient) call the <b>ControllerContext markScopeDirty(scope)</b> API, e.g. ControllerContext.getInstance().markScopeDirty(AdfFacesContext.getCurrentInstance().getViewScope()); This does the actual work of making sure that the server knows to refresh this state across the cluster (a small example bean follows after this list).</li>
<li>Finally, set the HA flag for the controller scopes in the .adf/META-INF/adf-config file. This corresponds to the following section inside of the file:</li>
</ol>
</div>
<blockquote>
<pre>
<adf-controller-config>
  <adf-scope-ha-support>true</adf-scope-ha-support>
</adf-controller-config>
</pre>
</blockquote>
<blockquote>
<div>If this flag is not set, the aforementioned markScopeDirty() API will be a no-op. So this flag provides a master switch to throw when you need HA support and to avoid the cost when you do not.</div>
</blockquote>
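<div>Putting steps 2, 3 and 4 together, here's a minimal sketch of what such a bean might look like (the class and its attributes are invented for illustration):</div>
<pre>
package oracle.adf.demo;

import java.io.Serializable;
import java.util.List;
import oracle.adf.controller.ControllerContext;
import oracle.adf.view.rich.context.AdfFacesContext;

// Destined for viewScope, so it must be Serializable (step 2)
public class SearchUIState implements Serializable {
    // Needs to survive fail-over, so not transient
    private String lastSearchTerm;
    // Can be rebuilt from the model after fail-over, so excluded (step 3)
    private transient List<String> cachedSuggestions;

    public void setLastSearchTerm(String lastSearchTerm) {
        this.lastSearchTerm = lastSearchTerm;
        // Step 4: flag the scope as dirty so the change is re-replicated.
        // This call is a no-op unless adf-scope-ha-support is set to true.
        ControllerContext.getInstance().markScopeDirty(
            AdfFacesContext.getCurrentInstance().getViewScope());
    }

    public String getLastSearchTerm() {
        return lastSearchTerm;
    }
}
</pre>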
<div>So if you've not done so already, take a moment to review your managed beans and check that you are really all doing this correctly. Even if you don't need to support HA today you might tomorrow...</div>

Maven and ADFBC Unit Tests in 12.1.3 Duncan Mills-Oracle 2014-08-13T12:22:48+00:00 2014-08-13T12:22:48+00:00

<p>An issue that has come up recently revolves around setting your Maven POM up in 12.1.3 such that you can run ADF BC JUnit tests successfully, both interactively in the IDE and headless through Maven, maybe in your Hudson jobs. Out of the box, the default POM that you will end up with will be missing a couple of vital bits of information and needs a little extra configuration.</p>
<p>Here are the steps you'll need to take:</p>
<h2>Step 1: Use The Correct JUnit Dependency</h2>
<p>Once you have created some unit tests JDeveloper will have added dependencies for JUnit from the JDeveloper JUnit extensions, something like this:</p>
<pre>
<dependency>
  <groupId>com.oracle.adf.library</groupId>
  <artifactId>JUnit-4-Runtime</artifactId>
  <version>12.1.3-0-0</version>
  <type>pom</type>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>com.oracle.adf.library</groupId>
  <artifactId>JUnit-Runtime</artifactId>
  <version>12.1.3-0-0</version>
  <type>pom</type>
  <scope>test</scope>
</dependency>
</pre>
<p>Delete both of these entries, if they exist, and drop in a dependency to the vanilla JUnit 4.11 library instead:</p>
<pre>
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.11</version>
  <type>jar</type>
  <scope>test</scope>
</dependency>
</pre>
<p>Failing to make this change will result in the following error:</p>
<pre>java.lang.NoClassDefFoundError: org/junit/runner/notification/RunListener</pre>
<h2>Step 2: Configure the Surefire Plugin to Select the Correct JUnit Version</h2>
<p>This is probably not needed, but it ensures that Surefire is left in no doubt about which version of JUnit it should be working with (in this case a version of 4.7 or higher):</p>
<pre>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <!-- match this to the surefire version in use in your build -->
      <version>2.17</version>
    </dependency>
  </dependencies>
</plugin>
</pre>
<h2>Step 3: Identifying Your connections.xml File</h2>
<p>When running outside of JDeveloper we need to set things up so that the unit tests can actually find the connection information that defines the datasource that your Application Modules are using. To do this, we need to add a configuration section to the Surefire plugin to add the location into the classpath. Add the following configuration into the Surefire plugin, after the <dependencies> section:</p>
<pre>
<configuration>
  <additionalClasspathElements>
    <additionalClasspathElement>
      ${basedir}/../.adf
    </additionalClasspathElement>
  </additionalClasspathElements>
</configuration>
</pre>
<p>This will ensure that the connection information can be found. If you forget this step you'll get a stack trace including the message:</p>
<pre>MDS-00013: no metadata found for metadata object "/META-INF/connections.xml"</pre>
<h2>Step 4 - Supply the Missing JPS Library</h2>
<p>Finally we need to supply the location of one extra required library. This requirement will hopefully be resolved in the next release, but for now add it. Again this is added to the Surefire plugin configuration <additionalClasspathElements>:</p>
<pre>
<additionalClasspathElement>
  ${oracleHome}/oracle_common/modules/oracle.jps_12.1.3/jps-manifest.jar
</additionalClasspathElement>
</pre>
<p>Omitting this will result in the error:</p>
<pre>WARNING: No credential could be loaded for Reference = Reference Class Name: oracle.jdeveloper.db.adapter.DatabaseProvider</pre>
<p>For reference, here's the complete Surefire plugin definition:</p>
<pre>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.surefire</groupId>
      <artifactId>surefire-junit47</artifactId>
      <version>2.17</version>
    </dependency>
  </dependencies>
  <configuration>
    <additionalClasspathElements>
      <additionalClasspathElement>
        ${basedir}/../.adf
      </additionalClasspathElement>
      <additionalClasspathElement>
        ${oracleHome}/oracle_common/modules/oracle.jps_12.1.3/jps-manifest.jar
      </additionalClasspathElement>
    </additionalClasspathElements>
  </configuration>
</plugin>
</pre>
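<p>For context, the kind of test being run here is the standard ADF BC fixture style that bootstraps an application module directly. A minimal sketch (the module and view object names are illustrative):</p>
<pre>
import static org.junit.Assert.assertTrue;

import oracle.jbo.ApplicationModule;
import oracle.jbo.ViewObject;
import oracle.jbo.client.Configuration;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class DepartmentsVOTest {
    private static ApplicationModule am;

    @BeforeClass
    public static void setUpClass() {
        // Resolves the connection via connections.xml - hence the
        // classpath additions described above
        am = Configuration.createRootApplicationModule(
                 "oracle.adf.demo.model.AppModule", "AppModuleLocal");
    }

    @Test
    public void testDepartmentsQuery() {
        ViewObject vo = am.findViewObject("DepartmentsView1");
        vo.executeQuery();
        assertTrue(vo.hasNext());
    }

    @AfterClass
    public static void tearDownClass() {
        Configuration.releaseRootApplicationModule(am, true);
    }
}
</pre>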
To do this, we need to add a configuration section to the Surefire plugin to add the location into the classpath. Add the following configuration into the Surefire plugin, after the <dependencies> section:</p> <pre><configuration> <additionalClasspathElements> <additionalClasspathElement> ${basedir}/../.adf </additionalClasspathElement> </additionalClasspathElements> </configuration> </pre> <p>This will ensure that the connection information can be found. If you forget this step you'll get a stack trace including the message:</p> <pre>MDS-00013: no metadata found for metadata object "/META-INF/connections.xml" </pre> <h2>Step 4: Supply the Missing JPS Library</h2> <p>Finally we need to supply the location of one extra required library. This requirement will hopefully be resolved in the next release, but for now add it. Again this is added to the Surefire plugin configuration, inside <additionalClasspathElements>. </p> <pre><additionalClasspathElement> ${oracleHome}/oracle_common/modules/oracle.jps_12.1.3/jps-manifest.jar </additionalClasspathElement> </pre> <p>Omitting this will result in the error:</p> <pre>WARNING: No credential could be loaded for Reference = Reference Class Name: oracle.jdeveloper.db.adapter.DatabaseProvider</pre> <p>For reference here's the complete Surefire plugin definition <> <configuration> <additionalClasspathElements> <additionalClasspathElement> ${basedir}/../.adf </additionalClasspathElement> <additionalClasspathElement> ${oracleHome}/oracle_common/modules/oracle.jps_12.1.3/jps-manifest.jar </additionalClasspathElement> </additionalClasspathElements> </configuration> </plugin> </pre> Per-Component Instance Skinning Duncan Mills-Oracle 2014-07-25T08:33:20+00:00 2014-07-25T10:52:40+00:00 <p>A question that comes up from time to time happened to be raised <strike>twice</strike> three times this week, so I thought it would be good to write the answer down. The question is this: how can I override the visual properties of a single instance of a component? </p> <p. </p> <p.</p> :</p> <pre>.saveMenu af|commandMenuItem::menu-item-icon-style { background-image:url("images/save.png"); background-repeat:no-repeat; } </pre> <p>And that's then applied to the menu item in question:</p> <pre> <af:commandMenuItem</pre> <p>That's it, pretty simple but it can be useful. </p> Diagram at Last Duncan Mills-Oracle 2014-06-26T21:17:01+00:00 2014-06-26T21:17:01+00:00 <p> <a href="">Data Visualization blog</a>, but I'll publish a pointer index here as well. I've just published <a href="" target="_blank">the introduction</a> to get you started and over the next week you'll have a daily injection of diagram goodness to build up your knowledge. So add the DVT blog to your RSS feed and sit back for the ride. </p> Customizing the Axis Labels in ADF Graphs Duncan Mills-Oracle 2014-02-26T18:19:43+00:00 2014-05-14T18:07:53+00:00 <p. </p> <p>In this article I want to concentrate on labeling capabilities for a graph axis, looking first of all at the declarative approaches, but then following that up with the more advanced programmatic option. </p> <h3>Managing Labels Declaratively</h3> <p>Control over the labels on the axis tick points is a good example of making the simple things declarative and the advanced things possible. For basic numeric formatting you can do everything with tags - for example formatting as currency, percentage or with a certain precision.
</p> <p>This is a default (bar)graph plotting employee salary against name; notice how the Y1 Axis has defaulted to a fairly sensible representation of the salary data using 0-14K:</p> <p><img src="" /><br /></p> <p>I can change that default scaling by setting the <strong>scaling</strong> attribute in the <dvt:y1TickLabel> tag. This allows scaling at the level of none | thousand | million | billion | trillion | quadrillion (enough to show national debt then!):</p> <pre><dvt:y1TickLabel</pre> <p>Changes the graph to:</p> <p><img src="" /><br /></p> <p>We can then further change the pattern of the numbers themselves by embedding <af:convertNumber> inside of the <dvt:y1TickLabel> tag. </p> <p>e.g.</p> <pre><dvt:y1TickLabel <af:convertNumber </dvt:y1TickLabel> </pre> <p>Adds currency formatting:</p> <p><img src="" /> </p> <p>And using the <dvt:graphFont> we can change colors and style:</p> <pre><dvt:y1TickLabel <dvt:graphFont <af:convertNumber </dvt:y1TickLabel> </pre> <p>Giving:</p> <p><img src="" /><br /></p> <h3>Need More Control? Using the TickLabelCallback...</h3> <p <a href="" target="_blank">timeSelector</a> and can directly set the formatting, however, bear with me because I'm just illustrating the point here.</p> <p>Here's the default output with the millisecond version of the date; as you can see, the millisecond value gets automatically scaled to the billions level.</p> <p><img src="" /> </p> <p>To override the default representation of the millisecond value we will need to create a Java class that implements the <em>oracle.dss.graph.TickLabelCallback</em> interface. Here's the simple example I'll use in this case:</p> <pre>; } } </pre> <p. </p> <p</p> <pre>public class GraphPageHandler {
    private UIGraph scatterPlot;
    public void setScatterPlot(UIGraph scatterPlot) {
        this.scatterPlot = scatterPlot;
        scatterPlot.<strong>setX1TickLabelCallback</strong>(new MSToDateFormatLabelCallback());
    }
</pre> <p>Now with the callback in place and the addition of:</p> <ol> <li>Using the <strong>axisMinValue</strong> attribute along with <strong> </p> Programmatic UI Scrolling in ADF Faces Duncan Mills-Oracle 2014-02-24T15:02:13+00:00 2014-03-25T12:57:50+00:00 <p. </p> <p>Declarative scrolling is really simple, you can drop a UI element such as commandButton onto the page and then nest, within that, the behavior tag <a href="" target="_blank"><af:scrollComponentIntoViewBehavior></a>. This tag is great if you have a long page and want to provide the users with a way to navigate to different sections, however, it needs to be wired to a command component and the user needs to physically invoke the operation.</p> <p. </p> <p>It turns out to be pretty simple to do, but there are different approaches depending on your version:</p> <h3>Programmatic Scrolling in 11.1.2.n and 12c</h3> <p>In the newer versions of ADF this issue is addressed head on with a new API on the AdfFacesContext -> <b>scrollComponentIntoView</b>(component, focus); This API takes a UIComponent reference for the component to scroll to, and the second argument is a boolean flag to indicate if the component should take focus as well (for example if it was an input field). </p> <h3>Programmatic Scrolling in 11.1.1.n</h3> <p> Prior to ADF Faces 11.1.2 the handy scrollComponentIntoView() API does not exist, however, we can achieve the same effect using some JavaScript.
(<b>Note</b> if you are on a version of ADF that does support the <i>scrollComponentIntoView()</i> Java API then use that, not this.)</p> <p>As with the behavior tag, you need to do two things. </p> <ol> <li>The target component has to have </p> <h2>Enabling Click History</h2> <p>As I mentioned, Click History is part of the framework in 12c, however, it's not switched on for every application by default. There will be a performance overhead for any kind of tracing like this so we want to be explicit about choosing to use it. </p> <p>In your application itself you need to make 2 small changes:</p> <ol> <li>Create a new web.xml context parameter with the name <b>oracle.adf.view.faces.context.ENABLE_ADF_EXECUTION_CONTEXT_PROVIDER</b> and set the value to <b>true</b>.</li> <li>Add an extra library reference to your /WEB-INF/weblogic.xml file (create this if you don't have one already). Here's the reference you need:</li> </ol> <pre> <library-ref> <library-name>odl.clickhistory.webapp</library-name> </library-ref> </pre> <p>Assuming that you have a correctly configured WebLogic domain that has been extended with JRF this library will already be defined for you. </p> <h2>Server Side Configuration of Click History</h2> <h3>Switching Click History On or Off</h3> <p> Although individual applications enable Click History, the server also has to be told to pay attention. This is very simple to do as it is controlled by switching a particular logger (<b>oracle.clickhistory.EUM<. </p> <p>So, with WLST running:</p> <pre>wlst> connect()
wlst> getLogLevel( </pre> <p>2. Use the custom handler for your root logger:</p> <pre><logger name="oracle.demo" level="FINEST" useParentHandlers='false'> <handler name='odl-handler'/> <handler name='wls-domain'/> <handler name='demo-console-handler'/> </logger> </pre> Setting up a standalone WebLogic 12c install for ADF Without a Database Duncan Mills-Oracle 2013-07-22T09:48:26+00:00 2013-07-30T08:30:21+00:00 <p> One change that many folks have encountered with the 12c (12.1.2) release of WebLogic and ADF concerns the setup of stand-alone WebLogic instances extended with JRF for ADF Application deployment. </p> <p>The main problem comes when creating the JRF extended domain. On page 6 of the config process screen you are confronted by the following:</p> <p><img src="" alt="Configure Database Screen" width="620" /> </p> <p>Note here that you need to provide information for the database that OPSS should use. </p> <p.</p> <h2>The Steps to Install Without Needing a Database</h2> <ol> <li. </li> <li>Start a shell / command prompt in $MW_HOME/wlserver/common/bin</li> <li>Set the environment variable QS_TEMPLATES to point to $MW_HOME/wlserver/common/templates/wls/wls_jrf.jar</li> <li>Run qs_config.sh and define the name of the domain that you want to create (e.g. adf_domain) along with the usual WebLogic passwords and ports that you require. Finish the Quick Start wizard without starting the new domain. </li> <li>Now run the conventional config.sh and on the first page choose "Update an Existing Domain" rather than "Create a new domain". The new domain (e.g. adf_domain) should now be listed for selection.</li> <li>On the second screen choose the templates that you wish to apply, e.g. Oracle JRF, Oracle Enterprise Manager etc. and move on through the wizard. </li> <li>On the Database Configuration Type screen this time you will see an extra option where Embedded Database (JavaDB) is offered and pre-selected.
Select that and continue with the domain setup as usual with whatever managed servers you need.</li> </ol> <p><img src="" width="620" /> </p> <p. </p> <p>The fact that the normal install for the ADF Web Runtime <u>does not</u> offer this non-database option should be taken as a strong hint as to how supported you will be running with this configuration in a production environment. Don't ask me for certification or support guidance, please contact Oracle Support for that information. </p> <h2>Further Reading</h2> <p>The use of an external RDBMS Security Store for WebLogic (the configuration that this article bypasses) is discussed in the WebLogic Documentation:</p> <ul> <li><i>Administering Security for Oracle WebLogic Server</i> --> Chapter 9: <a href="">Managing the RDBMS Security Store</a> </li> </ul> <p>Read that chapter for a better understanding of why the default JRF install asks you to define a database connection for this purpose and why I talk about only using the technique that I've outlined here for development purposes. </p> Adaptive Connections For ADFBC Duncan Mills-Oracle 2013-06-26T14:07:10+00:00 2013-06-26T14:07:10+00:00 <div>Some time ago I wrote an article on <a href="">Adaptive Bindings</a> showing how the pageDef for an ADF UI does not have to be wedded to a fixed data control or collection / View Object. This article has proved pretty popular, so as a follow up I wanted to cover another "Adaptive" feature of your ADF applications, the ability to make multiple different connections from an Application Module at runtime.</div> <div>Now, I'm sure you'll be aware that if you define your application to use a data-source rather than a hard-coded JDBC connection string, then you have the ability to change the target of that data-source after deployment to point to a different database. So that's great, but the reality of that is that this single connection is effectively fixed within the application, right? Well no; this, it turns out, is a common misconception.</div> <div> <p>To be clear, yes a single instance of an ADF Application Module is associated with a single connection but there is nothing to stop you from creating multiple instances of the <em>same<. </p> </div> <h2>What Does it Do? </h2> <div>The ELEnvInfoProvider is a pre-existing class (the full path is <em>oracle.jbo.client.ELEnvInfoProvider</em>) which you can plug into your ApplicationModule configuration using the <strong>jbo.envinfoprovider</strong> property. Visually you can set this in the editor, or you can also set it directly in the bc4j.xcfg (see below for an example).</div> <div><img width="600" src="" alt="Configuration Editor" /><br /></div> <div>Once you have plugged in this envinfoprovider, here's the fun bit: rather than defining the hard-coded name of a datasource, you can instead plug in an EL expression for the connection to use. So what's the benefit of that? Well it allows you to defer the selection of a connection until the point in time that you <u>instantiate</u> the AM.</div> <div>To define the expression itself you'll need to do a couple of things:</div> <div> <ol> <li) </li> <li>)</li> </ol> </div> <pre><BC4JConfig version="11.1" xmlns=""> <AppModuleConfigBag ApplicationName="oracle.demo.model.TargetAppModule"> <AppModuleConfig DeployPlatform="LOCAL" <b> <AM-Pooling jbo. <Database jbo.locking.
<Security AppModuleJndiName="oracle.demo.model.TargetAppModule"/> <b><Custom jbo.</b> </AppModuleConfig> </AppModuleConfigBag> </BC4JConfig></pre> <h3>Still Don't Quite Get It?</h3> <div>So far you might be thinking, well that's fine but what difference does it make if the connection is resolved "just in time" rather than up front and changed as required through Enterprise Manager?</div> <div>Well a trivial example would be where you have a single application deployed to your application server, but for different users you want to connect to different databases. Because the evaluation of the connection is deferred until you first reference the AM, you have a decision point that can take the user identity into account.</div> <div> .</p> <p>Hopefully you'll find this feature useful, let me know... </p> </div> <div><br /></div> UKOUG ADF Mobile Demo Duncan Mills-Oracle 2013-05-22T09:41:19+00:00 2013-05-22T09:41:19+00:00 <p>Yesterday I participated in a Special Interest Group meeting organised by the UK Oracle User Group on ADF Mobile. </p> <p:</p> <ul> <li><a href="">DRM010_UKOUGMobile.zip</a> </li> </ul> <div>Enjoy!</div> Lions and Tigers and RangeSize, Oh My! Duncan Mills-Oracle 2013-04-19T13:45:35+00:00 2013-04-19T13:45:35+00:00 <div>One of the most common mistakes I see made in ADF Business Components based applications is a failure to tune the View Objects, and specifically to tune the <strong>in Batches of</strong> parameter in the VO tuning section. This setting defaults to 1, which does not always meet the needs of the consuming UI or service interface and should generally be changed. This is a topic that I and others have <a href="" target="_blank">covered before</a>.</div> <div> <p <em>iterator RangeSize</em>.</p> </div> <div>The background to this was some recent detective work on an application where the time taken to display one particular screen was suspiciously long. </div> <div>The page in question had a tabular display of data, but an inspection of the VO tuning parameters showed that a reasonable Batch size of 51 was being used. What's more, the <strong>As Needed</strong> switch rather than the <strong>All at Once</strong> option in the VO tuning was being used. So the developer had done totally the right things there.</div> <div>Running a SQL Trace on the page revealed an interesting thing though. Because the batch size was pretty high we'd expect that the framework would have to only do one or at most two fetches from the cursor to satisfy the needs of that table. However, the TKProf output showed that in fact over 150 fetches took place retrieving over 8000 rows!</div> <div><br /></div> <div>My thought processes in diagnosing this were to look in the following places:</div> <div> <ol> <li>Are there alternative VO Instances defined on the AM where the tuning parameters are different (e.g. ALL_ROWS was specified)? We know the base definition is OK but it could be overridden. </li> <li>Any programmatic calls to change the Batch Size or fetch mode in the VO?</li> <li>Any programmatic calls to last() on the rowset or iterations through the rowset? </li> <li>Check for a RangeSize of -1 on the iterator definition in the pageDef files.</li> </ol> </div> <div><br /></div> <div>All of these drew a blank. The last one in particular felt like the problem but a search for the value of -1 in the pageDefs of the UI project only turned up legitimate usages of the -1 value.
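<br /><br />As an aside, when chasing something like this it can help to dump the RangeSize that each iterator in the current binding container is actually using. Here's a rough diagnostic sketch of the kind of thing I mean - it assumes the standard getIterBindingList() and getRangeSize() accessors on the ADF binding classes behave as documented, so treat it as illustrative scaffolding rather than production code:
<pre>// Diagnostic sketch only - list each iterator binding in the current
// binding container along with the RangeSize it is actually using.
DCBindingContainer bc =
    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
for (Object binding : bc.getIterBindingList()) {
    DCIteratorBinding iter = (DCIteratorBinding) binding;
    System.out.println(iter.getName() + " RangeSize=" + iter.getRangeSize());
}
</pre>
A RangeSize of -1 showing up here against a large collection is exactly the kind of smoking gun to look for.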
</div> <div><br /></div> <h3>Hold on, I don't Understand This RangeSize?</h3> <div>Maybe I should take a quick step back and explain the iterator RangeSize. So, as we've seen, the tuning options in the View Object will control how often the VO has to go back to the database to get a specific number of rows. The iterator rangeSize is defined in the pageDef file for a particular page, fragment or method activity and it defines how many rows the UI layer should ask the service layer (in this case the VO) for. </div> <div>Here's a typical definition that you'll see in the pageDef:</div> <pre><iterator Binds="EmployeesView1" RangeSize="25" DataControl="HRServiceAMDataControl" id="EmployeesView1Iterator" ChangeEventPolicy="ppr"/></pre> <div><br /></div> <div>You'll see that the rangeSize here is set to 25, which just happens to be the value that is defaulted in when you drag and drop a binding into the page. However, it turns out that 25 is not the <em>default</em> value, something which has a bearing later in this investigation as we'll see.</div> <div>So in this default case when the iterator is asked for data, it in turn will ask the VO for 25 rows, and if the VO does not already have that many rows in the cache it will have to go back to the database as many times, as determined by the batch-size, as it needs to get enough data.</div> <div><br /></div> <h3>Back to the Problem Page</h3> <div><br /></div> <div>As it happens the RangeSize in the pageDef for the table displaying the problem VO was indeed the de facto default of 25, so, sad to say, it was not the obvious suspect at fault; more investigation was needed.</div> <div><br /></div> <div>At this stage the investigation splits into a couple of parallel efforts: manual code inspection, and tracing using the <a href="">ADF Logging capability</a> to try and work out what interactions were happening with the problem VO.</div> <div><br /></div> <h3>Welcome to the Internet, Please Tread with Care</h3> <div>What can we trust? Well in the world of ADF Blogs in the wild there are some great bloggers, but that does not mean that you can just copy without thinking. It turned out that one of the problems with this application was that it fell foul of <em>copy-without-thinking</em> syndrome.</div> <div>The code in question seems innocent enough; it's published out there on the internet as a way of refreshing an iterator:</div> <div><br /></div> <pre>// Please, Please Don't do this! (My comment)
DCIteratorBinding iterBind = (DCIteratorBinding) dc.get("<your iterator name>");
iterBind.refresh(DCIteratorBinding.RANGESIZE_UNLIMITED);
</pre> <div><br /></div> <div>Two facts to discuss in relation to this code snippet:</div> <div> <ol> <li>Read the <a href="">JavaDoc for DCIteratorBinding</a> - the refresh() method is very clearly marked as <strong>Internal only</strong>, applications should not use. That's what we call "a hint" in the trade; you can choose to ignore it but don't come crying...</li> <li>Look at that parameter being passed to the refresh method, <strong>DCIteratorBinding.RANGESIZE_UNLIMITED</strong> - can you guess what that does? Yes, it does the same as setting the RangeSize to -1 and will cause the VO to be asked for <strong>all</strong> of the data for that query. You can see how bad that could be if the VO has the potential to return a lot of rows.</li> </ol> </div> <div>So something to put right already.
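<br /><br />For the record, if what you actually need is to re-execute the query behind an iterator, you can stay entirely on the public API and leave the RangeSize well alone. Here's a minimal sketch, mirroring the lookup style of the snippet above (the iterator name is a placeholder):
<pre>// A safer alternative sketch - re-execute the iterator's query using
// public API only; no internal refresh() call, no RangeSize side effect.
BindingContext bctx = BindingContext.getCurrent();
BindingContainer bindings = bctx.getCurrentBindingsEntry();
DCIteratorBinding iterBind = (DCIteratorBinding) bindings.get("<your iterator name>");
iterBind.executeQuery();
</pre>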
</div> <h3>But Wait, There's More!</h3> <div>Although the call to refresh was a great catch and the application will be better without it, it turned out not to be the cause - darn. </div> <div>However, the parallel effort to run some more tracing found the smoking gun. The ADF Log trace showed a double execute on the iterator for the VO in question, or to be more precise, executes on two different iterators bound to the same VO from different regions on the page. </div> <div>A useful diagnostic here was then to set a breakpoint in the <strong>setRangeSize()</strong> of <em>oracle.adf.model.binding.DCIteratorBinding</em>. Doing this we could see that the first iterator execution was actually responsible for setting the RangeSize to -1 and the second to the value we were expecting for that table based on the pageDef. </div> <div>All credit to the development team I was working with here who ferreted out the actual problem; it came down in the end to one of omission. </div> <div> <p>Recall I made the statement earlier about 25 being the de facto default for the RangeSize? Very true: when you create a binding, that's what the IDE puts in for you. But what's the <em>actual</em> default? Well that turns out to be -1. So if you omit the RangeSize from the iterator definition by intent or mistake, you're going to have a potential side effect you may not have expected! That was exactly the problem in this case - no RangeSize. </p> <p>Specifically the problem was caused by a method binding in one of the bounded task flows in the page. The pageDef for this method activity included an iterator defined for the problem VO but without a RangeSize defined. </p> </div> <h3>Lessons to Learn</h3> <div> <ol> <li.</li> <li>Always tune your VO definitions or instances to the requirements of each endpoint. If this means having multiple VO definitions or multiple VO instances with different tuning parameters then do it.</li> <li!</li> <li>Use a RangeSize of -1 with caution. It has its place for iterators that serve short lists used for lookups and menus but anything else is an exception.</li> <li>Don't blindly copy everything you see in the blog-o-sphere. If you don't recognize an API call then look it up. If something says it's for internal use only, then guess what, don't use it.</li> <li>Never, ever define an iterator in your pageDef without an explicit rangeSize. If you need to see all rows, say so with the -1 value, otherwise use a positive integer.</li> </ol> </div> <div><br /></div> <div>We all go home older and wiser...</div> <div><br /></div> MySQL & ADF Business Components - Enum types Duncan Mills-Oracle 2013-02-18T09:12:30+00:00 2013-02-18T09:12:30+00:00 A quick guide to effectively mapping and representing MySQL enumeration types through ADF Business Components <p>Using ADF BC against MySQL is, I feel, a fairly under-explored area, and although there are several articles which will help you with the basic setup, things start to fade out when you get into it. I think that the key missing link is that of data type mapping so I'm intending to put together a series of articles that will explore some of these topics. </p> <p>Here, I'll start with a pretty fundamental one. If you explore the MySQL World database (or indeed the Sakila database) you'll come across enum types.
(If you want to follow along here you can head over to the <a href="">MySQL Other Documentation</a> page to get the World database to play with.)</p> <p>The World database contains a table called Country:</p> <pre>+-----------------+
| Tables_in_world |
+-----------------+
| City            |
| Country         |
| CountryLanguage |
+-----------------+</pre> <p>And if we describe that, here is the definition:</p> <pre>+----------------+---------------------------------------------------------------------------------------+------+-----+---------+-------+
| Field          | Type                                                                                  | Null | Key | Default | Extra |
+----------------+---------------------------------------------------------------------------------------+------+-----+---------+-------+
| Code           | char(3)                                                                               | NO   | PRI |         |       |
| Name           | char(52)                                                                              | NO   |     |         |       |
| Continent      | enum('Asia','Europe','North America','Africa','Oceania','Antarctica','South America') |      |     |         |       |
+----------------+---------------------------------------------------------------------------------------+------+-----+---------+-------+</pre> <p>For now, let's concentrate on the Continent field, defined as enum('Asia','Europe',...). Functionally this is similar, I guess, to a varchar2 column in Oracle, with a check constraint using an "in" with a static list. </p> <p>So if we generate a BC Entity object from that table, what do we get for the Continent field (this is using SQL92 mode and the Java type-map in 11.1.2.3 of JDeveloper):</p> <pre> <Attribute Name="Continent" IsNotNull="true" ColumnName="Continent" SQLType="$none$" Domain="oracle.demo.mysql.model.eo.Enum" Type="oracle.demo.mysql.model.eo.common.Enum" ColumnType="ENUM" TableName="world.Country"/></pre> <p>So you can see here that the <strong>Type</strong> seems to be something specific to my project, and indeed, the generation of the entity has not only generated the XML for the EO but also the XML and Java class for a new domain type called <font face="courier new, courier, monospace">oracle.demo.mysql.model.eo.common.Enum</font>. This generated class implements the oracle.jbo.domain.DomainInterface, and we'll have a look at it in a bit more detail in a second. </p> <h2>Does it Work Though?</h2> <p>So what if we stop right there and just run the ADF BC tester, does the default EO/VO combo actually function? Well the answer to that is Yes (which is a relief!), however, if you change the value of the Continent field to an invalid value (i.e. not one of the values listed in the enum) then the default error message is a little sparse on the actual reason for the problem:</p> <pre>JBO-26041: Failed to post data to database during "Update":
SQL Statement "UPDATE world.Country Country SET Continent=? WHERE Code=?".</pre> <p>What's more, you don't see this error until the data is actually posted to the database and it would be nice to do this kind of validation up-front. A second, related problem is down to the way that enums are handled in MySQL. If I update the field to a valid value, but use a different case from that declared in the enum() e.g. SOUTH America, then MySQL will nicely match this and convert it to "South America" in the database. However, that of course mutates the record as far as ADFBC is concerned and further updates will result in:</p> <pre>JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[... ].</pre> <h2>So how can we improve things?</h2> <p>So first of all, it's trivial to fix the locking problem (JBO-25014 error).
For that we just need to update the properties of the attribute in the EO to refresh after insert and update:</p> <p><img width="600" src="" alt="Setting the refresh properties for Continents" /> </p> <p>Next, how can we improve both the timing and the error message provided by the validation? Well in this case we need to fill in some blanks in the generated domain class. If you have a look at it you'll see that there is a validate() method stub generated into the class. </p> <pre>/**
 * Implements domain validation logic and throws a JboException on error.
 */
protected void validate() {
    // ### Implement custom domain validation logic here. ###
}
</pre> <p>Not surprisingly all we need to do there is implement some logic in there and throw a JboException with a more informative error message. This will ensure that not only is the message better, but also validation will happen as soon as the field value is set in the EO, rather than being deferred and waiting for the database post.</p> <p>Before you implement the method though I'd recommend that you do some refactoring to change both the name of the Domain and Java class to something a little more specific than "Enum". To do this, simply select the top level Enum node in the navigator and use <strong>Refactor | Rename</strong> from the menu bar. This will rename the Java class, the XML file and of course fix up the Country EO itself correctly. I've renamed mine to <i>CountryContinentsEnum</i>.</p> <h3>Implement the Validation Logic</h3> <p>The logic we need in this case is to inspect the <strong>mData</strong> value inside of the validate() method and compare it to a list of the valid Continents (as defined by the MySQL column definition). The twist here is to remember that MySQL will in fact be happy with any case combination for the Enum value, so we need to carry out a case insensitive compare. In my example this consists of three steps:</p> <h4>1) Define the list of Valid Values</h4> <p>I actually use a HashMap here where the key of the map is the uppercase version of the Continent name and the value is the mixed case version. For the validation below we only need the uppercase version for the actual check, but it's nice to have the mixed case version for any exception messages that we raise.</p> <p>This list is stored in a static class variable:</p> <pre>public static final Map<String, String> CONTINENT_ENUM = new HashMap<String, String>(7);</pre> <h4>2) Populate the Valid Values Map</h4> <p>I populate the Map in a static block in the class. These values can be shared across all instances.</p> <pre>static {
    CONTINENT_ENUM.put("ASIA", "Asia");
    CONTINENT_ENUM.put("EUROPE", "Europe");
    CONTINENT_ENUM.put("NORTH AMERICA", "North America");
    CONTINENT_ENUM.put("AFRICA", "Africa");
    CONTINENT_ENUM.put("OCEANIA", "Oceania");
    CONTINENT_ENUM.put("ANTARCTICA", "Antarctica");
    CONTINENT_ENUM.put("SOUTH AMERICA", "South America");
}</pre> <p><strong>Hint</strong>: Why not map the To-Uppercase keyboard shortcut in <strong>Tools | Preferences</strong>; it's not mapped by default but the function is there in the IDE! That will make it easier to create the uppercase version of the value. </p> <h4>3) Implement the validate() Method</h4> <p>Finally the actual validate method.
Most of this is the error handling as the actual validation check itself is a simple (and quick) containsKey() call on the map.</p> <pre>protected void validate() {
    // MySQL will convert the value to the expected case
    if (!CONTINENT_ENUM.containsKey(mData.toUpperCase())) {
        // Construct a nice error to send to the client
        boolean firstLoop = true;
        StringBuilder errorMsg = new StringBuilder("Incorrect continent value supplied. Pick one from ");
        for (Map.Entry<String, String> entry : CONTINENT_ENUM.entrySet()) {
            if (!firstLoop) {
                errorMsg.append(", ");
            }
            errorMsg.append(entry.getValue());
            firstLoop = false;
        }
        throw new JboException(errorMsg.toString());
    }
}</pre> <p>Now to test that, here's the result when we put an invalid value into the BC Tester:</p> <p> <img width="600" src="" /></p> <p>Great, but look what happens when we have a Web UI bound to the same component and attempt to put an invalid value in:</p> <p><img src="" /> </p> <p>Whoops! So here's a slight problem: when run through the binding layer we're not seeing that well crafted error message, instead there is some data conversion error. What's more, if I change to a valid value such as "Asia" I get the same error, so it's not my validate() method that's barfing here. (In fact if you remove the validate method altogether then you'll still get the error.)</p> <h4>What to Do?</h4> <p>What's happening here is that the binding layer itself is sensibly doing a check on datatype and is not seeing how to do the conversion (even though ADF BC itself will handle an incoming string). So we need to give JSF a little help and specify a Converter. A basic converter in JSF is very simple, it just has to implement two methods, getAsString() and getAsObject(), which convert from the Object type to a String for HTTP to use and vice-versa.</p> <p>Here's the simple implementation in this case:</p> <pre>package oracle.demo.mysql.view;

import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.convert.Converter;
import javax.faces.convert.ConverterException;

import oracle.adf.share.logging.ADFLogger;
import oracle.demo.mysql.model.eo.common.CountryContinentsEnum;
import oracle.jbo.JboException;

/**
 * Generic converter for the MySQL Continent Enum type, which is just an enumeration of Strings.
 * This simply wraps the real conversion which will happen at the ADF BC layer
 */
public class ContinentEnumConverter implements Converter {
    private static ADFLogger _logger = ADFLogger.createADFLogger(ContinentEnumConverter.class);

    public ContinentEnumConverter() {
        super();
    }

    /**
     * Standard converter class method that converts the String form of the object sent with the HTTP request
     * into the real object type that needs to be handed off to the model
     * @param facesContext
     * @param uIComponent
     * @param stringValue
     * @return CountryContinentsEnum created from stringValue
     */
    @Override
    public Object getAsObject(FacesContext facesContext, UIComponent uIComponent, String stringValue) {
        CountryContinentsEnum continent = null;
        try {
            continent = new CountryContinentsEnum(stringValue);
        } catch (JboException jboex) {
            // If the validate method fails then this is the exception that we will get
            reportConversionProblem(jboex.getMessage(), false);
        } catch (Exception ex) {
            // Others: just in case
            reportConversionProblem("Error: Can't create instance of Continent " + ex, true);
        }
        return continent;
    }

    /**
     * Standard converter method that converts from the Object type to a String form suitable
     * for HTTP
     * @param facesContext
     * @param uIComponent
     * @param objectValue
     * @return String value of objectValue
     */
    @Override
    public String getAsString(FacesContext facesContext, UIComponent uIComponent, Object objectValue) {
        return objectValue.toString();
    }

    /*
     * Just queues up the conversion problem and optionally logs it as well
     */
    private void reportConversionProblem(String message, boolean logit) {
        if (logit) {
            _logger.severe("Error: " + message);
        }
        FacesMessage fmsg = new FacesMessage(null, message);
        fmsg.setSeverity(FacesMessage.SEVERITY_ERROR);
        throw new ConverterException(fmsg);
    }
}
</pre> <p>Once you have a converter class it then has to be associated with the components that it will need to operate on in some way. There are two options: firstly you can associate the converter with a specific class so that whenever JSF encounters a conversion using that class it will use the associated converter for that, or secondly you can create the converter with a specific ID and then associate that converter ID with individual components using the <f:converter> tag. </p> <p>In this case I'm using the former method so that I won't have to make any changes to the page, I just need this entry in my faces-config.xml file:</p> <pre><converter> <converter-for-class> oracle.demo.mysql.model.eo.common.CountryContinentsEnum </converter-for-class> <converter-class> oracle.demo.mysql.view.ContinentEnumConverter </converter-class> </converter></pre> <p>Now when we induce an error, here's the result:</p> <p><img src="" /> </p> <h2>An Easier Way?</h2> <p>In this example, I've taken a very strictly typed approach to MySQL Enums where we're able to take advantage of the ADF BC domain capabilities to handle the type conversion. However, the fact remains that enums in MySQL are always lists of Strings, so another valid approach to the problem is to simply change the generated mapping from the enum type to String and then add on a validation in the ADF BC layer to restrict the valid values that can be squirted into MySQL. You'll have to handle the fact that the case does not have to match exactly as part of this validation though.</p> <p>My preference though would be to stick to the explicit type conversion method I'm using here, but pair that with a list of values definition on the attribute so that the UI always presents the enumeration as a list in any case.</p> Refresh Problems using Adaptive Bindings Duncan Mills-Oracle 2013-01-18T10:23:49+00:00 2013-01-18T10:23:50+00:00 <p>In a previous article (<a href="" target="_blank">Towards Ultra-Reusability for ADF - Adaptive Bindings</a>) I discussed how ADF Data Binding can be a lot more flexible than you would think due to the neat trick of being able to use expression language within the PageDef file to create bind sources that can be switched at runtime.</p> <p>As it happens my good buddy <a href="" target="_blank">Frank Nimphius</a>. </p> <p>No worries though, given that Frank is a clever chap he worked out the correct way to address this, which is to simply call clearForRecreate() on the iterator binding.</p> <pre>BindingContext bctx = BindingContext.getCurrent();
BindingContainer bindings = bctx.getCurrentBindingsEntry();
DCIteratorBinding iter = (DCIteratorBinding) bindings.get("TableSourceIterator");
iter.clearForRecreate();
</pre> <p>Thanks Frank!
</p> Towards Ultra-Reusability for ADF - Adaptive Bindings Duncan Mills-Oracle 2012-11-17T08:08:56+00:00 2013-09-26T18:21:51+00:00 <p>The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and implicitly the introduction of a re-use culture, particularly in large applications.</p> <p, "<em>adaptive bindings</em>".</p> <p? </p> <p>Hold on you say, great idea, however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file and:</p> <ol> <li>I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow </li> <li>If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle</li> </ol> <p>Ah, joy! I reply; no need to panic, you can just use adaptive bindings.</p> <h3>Defining an Adaptive Binding</h3> <p>It's easiest to explain with a simple before and after use case. Here's a basic pageDef definition for our familiar Departments table. </p> <pre>> </pre> <p>Here's the adaptive version:</p> <pre><executables> <iterator Binds="<strong>${pageFlowScope.voName}</strong>" DataControl="HRAppModuleDataControl" RangeSize="25" id="TableSourceIterator"/> </executables> <bindings> <tree IterBinding="TableSourceIterator" id="GenericView"> <nodeDefinition Name="GenericViewNode"/> </tree> </bindings> </pre> <p>You'll notice three changes here. </p> <ol> <li>Most importantly, you'll see that the hard-coded View Object name that formerly populated the iterator <strong>Binds </strong>attribute is gone and has been replaced by an expression (<strong>${pageFlowScope.voName}</strong>). This of course, is key; you can see that we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table!</li> <li>I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable</li> <li>The tree binding itself has simplified and the node definition is now empty. Now what this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat! (kudos to <a href="Eugene%20Fedorenko" target="_blank">Eugene Fedorenko</a> at this point who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year)</li> </ol> <h3>Using the adaptive binding in the UI</h3> <p:</p> <pre>"/></pre> <pre> </af:column> </af:forEach> </af:table></pre> <p...) </p> <h3>One Final Twist</h3> <p> To finish on a high note I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator, see if you can notice the subtle change?</p> <pre><iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25" id="TableSourceIterator"/> </pre> <p>Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the <b>data control</b>. <i>types</i> of data controls, not just one flavour. Enjoy!</p> <h2>Update</h2> <p>Read <a href="" target="_blank">this post</a> as well on overcoming possible refresh problems when changing the source on a single page.
</p> <h2>Further update</h2> <p>Check out this <a href="" target="_blank">article from Luc Bors</a> on using similar ideas with Query Components / View Criteria. </p> ADF Logging In Deployed Apps Duncan Mills-Oracle 2012-10-18T09:39:24+00:00 2012-10-18T09:39:24+00:00 <p>Harking back to my series on <a href="" target="_blank" title="Logging article index">using the ADF logger</a> and the related <a href="" target="_blank" title="Video Tutorial on the logger">ADF Insider Video</a>,. </p> <p>Before we start I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.</p> <h3>Step 1 - Select your Application</h3> <p>In the EM navigator select the app you're interested in:</p> <p><img src="" /><br /></p> <p>At this point you can actually bring up the context (right mouse click) menu to jump to the logging, but let's do it another way. </p> <h3>Step 2 - Open the Application Deployment Menu</h3> <p>At the top of the screen, underneath the application name, you'll find a drop down menu which will take you to the options to view log messages and configure logging, thus:</p> <p><img src="" /><br /></p> <h3>Step 3 - Set your Logging Levels</h3> <p>Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. </p> <p><img src="" /><br /></p> <p>In this case I've filtered the class list down to just <em>oracle.demo</em>, and set the log level to config. You can now go away and do stuff in the app to generate log entries.</p> <h3>Step 4 - View the Output</h3> <p>Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. </p> <p><img src="" /><br /></p> <p>In this case I've filtered by module name. You'll notice here that you can again look at related log messages. </p> <p>Importantly, you'll also see the name of the log file that holds this message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, then you can see which log to download.</p> ADF - Now with Robots! Duncan Mills-Oracle 2012-10-05T22:51:49+00:00 2012-10-05T22:51:49+00:00 <p>I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth re-visiting. Check out this video, and then read on:<br /></p> <p> <iframe width="560" height="315" src="" frameborder="0"></iframe> </p> <p><br /. </p> <p><img src="" alt="SL150 GUI Screen Shot" />. </p> <p!</p> <p>This is a project that I've been personally involved in and I'm pumped to see such a good result and, I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do).</p> <p>More info on the SL150 (should you feel the urge to own one) is <a href="" target="_blank">here</a>. </p> forEach and Facelets - a bugfarm just waiting for harvest Duncan Mills-Oracle 2012-09-04T11:06:18+00:00 2013-07-17T13:57:59+00:00 <p>An issue that I've encountered before and saw again today seems worthy of a little write-up. It's all to do with a subtle yet highly important difference in behaviour between JSF 2 running with JSP and running on Facelets (.jsf pages).
The incident I saw today can be seen as a report on the <a href="" target="_blank">ADF EMG</a> bugzilla (<a href="" target="_blank">Issue 53</a>) and in a <a href="" target="_blank">blog posting</a> by Ulrich Gerkmann-Bartels who reported the issue to the EMG. Ulrich's issue nicely shows how tricky this particular gotcha can be. On the surface, the problem is squarely the fault of MDS but underneath, MDS is, in fact, innocent.</p> <p>To summarize the problem in a simpler testcase than Ulrich's example, here's a simple fragment of code:</p> <pre><af:forEach <af:commandLink </af:forEach></pre> <p>Looks innocent enough, right? We see a bunch of links printed out, great. </p> <p>The issue here though is the <strong>id</strong> attribute. Logically you can kind of see the problem. The forEach loop is creating (presumably) multiple instances of the commandLink, but only one <em>id</em> is specified - cl1. We know that IDs have to be unique within a JSF component tree, so that must be a bad thing? The problem is that JSF under JSP implements some hacks when the component tree is generated to transparently fix this problem for you. Behind the scenes it ensures that each instance really does have a unique id. Really nice of it to do so, thank you very much.</p> <p>However (you could see this coming), the same is not true when running with Facelets (this is under 11.1.2.n); in that case, what you put for the <strong>id</strong> is what you get, and JSF does not mess around in the background for you. So you end up with a component tree that contains duplicate ids which are only created at runtime. So subtle chaos can ensue. The symptoms are wide and varied, from something pretty obscure such as the combination Ulrich uncovered, to something as frustrating as your ActionListener just not being triggered. And yes, I've wasted hours on just such an issue. </p> <h3>The Solution</h3> <p>Once you're aware of this one it's really simple to fix it; there are two options:</p> <ol> <li>Remove the id attribute on components that will cause some kind of submission within the forEach loop altogether and let JSF do the right thing in generating them. Then you'll be assured of uniqueness.</li> <li>Use the <strong>var </strong>attribute of the loop to generate a unique id for each child instance. For example in the above case: <strong><af:commandLinkJAVASERVERFACES-1527</a></p> ADF and EBS Applications Duncan Mills-Oracle 2012-07-09T08:30:04+00:00 2012-07-09T08:30:04+00:00 A blog entry that may be of interest to those of you building ADF apps that, in some way, need to integrate with Oracle E-Business Suite. Head over to Steven Chan's Applications Technology Blog: <a href="">Building Extensions Using E-Business Suite SDK for Java</a> New Sample Demonstrating the Traversing of Tree Bindings Duncan Mills-Oracle 2012-07-03T09:12:44+00:00 2012-07-06T07:05:46+00:00 <p. </p> <p>Putting this together you can represent the data encoded into a tree binding in all sorts of ways.</p> <p>As an example I’ve put together a very simple sample based on the HR schema and uploaded it to the ADF Sample project.
It produces this UI:</p> <p><img src="" alt="Example output from this technique" /> </p> <p>The important code is shown here for a Region -> Country -> Location Hierarchy:</p> <pre>> </pre> <p>You can download the entire sample from <a href="">here</a>:</p> The UIManager Pattern Duncan Mills-Oracle 2012-04-05T08:17:46+00:00 2014-01-27T09:55:41+00:00 <p>One of the most common mistakes that I see when reviewing ADF application code, is the sin of storing UI component references, most commonly things like table or tree components, in Session or PageFlow scope. The reasons why this is bad are simple; firstly, these UI object references are not serializable so would not survive a session migration between servers, and secondly there is no guarantee that the framework will re-use the same component tree from request to request, although in practice it generally does do so.</p> <p>So the danger here is that, at best, you end up with an NPE after your session has migrated, and at worst, you end up pinning old generations of the component tree happily eating up your precious memory. So that's clear: we should never, ever, be storing references to components anywhere other than request scope (or maybe backing bean scope). So double check the scope of those <em>binding</em> attributes that map component references into a managed bean in your applications. </p> <h3>Why is it Such a Common Mistake?</h3> <p>At this point I want to examine why there is this urge to hold onto these references anyway? After all, JSF will obligingly populate your backing beans with the fresh and correct reference when needed. </p> <p>In most cases, it seems that the rationale is down to a lack of distinction within the application between what is data and what is presentation. I think, perhaps, a cause of this is the logical separation between business data behind the ADF data binding (#{bindings}) façade and the UI components themselves. Developers tend to think, OK this is my data layer behind the bindings object and everything else is just UI. Of course that's not the case. The UI layer itself will have state which is intrinsically linked to the UI presentation rather than the business model, but at the same time should not be tightly bound to a specific instance of any single UI component. So here's the problem. I think developers try and use the UI components as state-holders for this kind of data, rather than using them to represent that state. An example of this might be something like the selection state of a tabset (panelTabbed), you might be interested in knowing what the currently disclosed tab is. The temptation that leads to the component reference sin is to go and ask the tabset what the selection is. That of course is fine in context - e.g. a handler within the same request scoped bean that's got the binding to the tabset. However, it leads to problems when you subsequently want the same information outside of the immediate scope. The simple solution seems to be to chuck that component reference into session scope and then you can simply re-check in the same way, leading of course to this mistake.</p> <h3>Turn it on its Head</h3> <p>So the correct solution to this is to turn the problem on its head. If you are going to be interested in the value or state of some component outside of the immediate request context then it becomes persistent state (persistent in the sense that it extends beyond the lifespan of a single request).
So you need to externalize that state outside of the component and have the component reference and manipulate that state as needed, rather than owning it. This is what I call the UIManager pattern. </p> <h3>Defining the Pattern</h3> <p>The UIManager pattern really is very simple. The premise is that every application should define a session scoped managed bean, appropriately named UIManager, which is specifically responsible for holding this persistent UI component related state. The actual makeup of the UIManager class varies depending on the needs of the application and the amount of state that needs to be stored. Generally I'll start off with a Map in which individual flags can be created as required, although you could opt for a more formal set of typed member variables with getters and setters, or indeed a mix. This UIManager class is defined as a session scoped managed bean (#{uiManager}) in the faces-config.xml. </p> <p>The pattern is to then inject this instance of the class into any other managed bean (usually request scope) that needs it using a managed property. So typically you'll have something like this:</p> <pre> <managed-bean> <managed-bean-name>uiManager</managed-bean-name> <managed-bean-class>oracle.demo.view.state.UIManager</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean> </pre> <p>Which is then injected into any backing bean that needs it: </p> <pre> <managed-bean> <managed-bean-name>mainPageBB</managed-bean-name> <managed-bean-class>oracle.demo.view.MainBacking</managed-bean-class> <managed-bean-scope>request</managed-bean-scope> <managed-property> <property-name>uiManager</property-name> <property-class>oracle.demo.view.state.UIManager</property-class> <value>#{uiManager}</value> </managed-property> </managed-bean></pre> <p>In this case the backing bean in question needs a member variable to hold and reference the UIManager:</p> <pre>private UIManager _uiManager; </pre> <p>Which should be exposed via a getter and setter pair with names that match the managed property name (e.g. setUiManager(UIManager _uiManager), getUiManager()). </p> <p>This will then give your code within the backing bean full access to the UI state. </p> <p>UI components in the page can, of course, directly reference the uiManager bean in their properties, for example, going back to the tab-set example you might have something like this:</p> <pre><af:panelTabbed> <af:showDetailItem ... </af:showDetailItem> <af:showDetailItem</pre>
Do it today!</p> <h3>More Information</h3> <p>Another interesting article relating to this topic has been written by Steven Davelaar subsequent to the original publication of this post. This article is well worth checking out more more information on this area and some best practice around component references.</p> <p> </p> <ul> <li><a href="">Rules and Best Practices for JSF Component Binding in ADF</a> </li> </ul> <p> </p> uncommittedDataWarning - It's a Matter of Timing Duncan Mills-Oracle 2012-03-01T10:02:40+00:00 2012-03-01T10:02:40+00:00 <p>An interesting nugget came across my desk yesterday that I though was worth sharing. What's wrong with this picture:</p> <p><img src="" alt="Task flow Diagram" /> </p> <p> </p> <p>Absolutely nothing right? That's exactly the kind of taskflow we would expect to see where the user can, from the listEmployee screen, either create a new empty record by following the <i>new </i>navigation rule or edit the current row using the <i>edit</i> rule. The problem in this particular case was, however, that the employeeList screen has </pre> <p>So what happens? Every time the <b>new</b> navigation rule is followed to navigate to the editEmployee screen the uncommitted data warning pops up. So what's going on here? The listEmployee screen is read only in this case how could it have marked the data control as dirty? </p> <p <i>before</i> the navigation away from the page, so at the point in time that the <b>continue</b> navigation is invoked the DataControl will be dirty after all and the uncommittedDataWarning check will fail, popping up the dialog. </p> <p>The solution in this case is simple (assuming that you <b>have </b>to keep the </pre> <p>Then the command item that triggers the navigation to the edit page sets that flag (or does not set it if you want to do an edit rather than an insert) :</p> <pre><af:commandButton <af:setPropertyListener </af:commandButton></pre> <p>So there you go - there is life in the old techniques yet! </p> Break Group Formatting in a Table - Part 2 Duncan Mills-Oracle 2012-01-21T17:26:02+00:00 2012-01-21T17:26:02+00:00 <p>In <a href="">part 1 of this series</a> I discussed the use of the EL map mechanism as a way that we could fake a function call to manage the logic of the Break Group value display. In this article I wanted to discuss an alternative which is to actually extend Expression Language and add a custom function to do the job in a "proper" way. </p> <p>In doing this I've taken the opportunity to make the code more flexible and generic than the Map example by allowing breaking at multiple levels and breaking in multiple tables in the same view. All in all this is a much cleaner and really simpler solution than that covered in Part 1.</p> <h3>Defining the Function</h3> <p>The EL function that I'm using here is defined in a static method within a class in the project. It is of course something that could be easlily bundled up into a re-usable JAR but in this case I've done everthing in the ViewController project. Note that I'm using 11.1.2 here and a facelets based page, but this process is essentially the same with 11.1.1.n and JSP(X) based pages.</p> <p>Here's the function class. It just defines the single static method to do the check. Notice that this function returns a Boolean to indicate a match with the previous value, but as arguments it takes the value to compare and a key. This key is an arbitrary String which will allow us to manage several parallel compares. 
For example if we wanted two break group tables on the same page we would use a unique key value for each. Likewise if you wanted to have more than one break column in a single table you just need to give each a unique identified for this value. Secondly I've genericized the code to use Object as the type of the compare value so you should be able to break on any attribute type. </p> <pre>package oracle.demo.breakgroup.el; import java.util.HashMap; import java.util.Map; import oracle.adf.view.rich.context.AdfFacesContext; public final class BreakGroupFunctions { //Key used to store the map we use in ViewScope private final static String public static Boolean compareWithLastValue(String compareKey, Object compareValue ){ Boolean repeatedValue = false; AdfFacesContext actx = AdfFacesContext.getCurrentInstance(); Map viewScopeMap = actx.getViewScope(); Map bgs; if (viewScopeMap.containsKey(BREAK_GROUP_STORE)){ bgs = (Map)viewScopeMap.get(BREAK_GROUP_STORE); } else{ // First access so create and populate the store map bgs = new HashMap(1); viewScopeMap.put(BREAK_GROUP_STORE, bgs); } if (bgs.containsKey(compareKey)){ Object compareLast = bgs.get(compareKey); if (compareLast != null && compareLast.equals(compareValue) ){ repeatedValue = true; } else{ // new value, so reset what we'e got stored bgs.put(compareKey,compareValue); } } else { // This must be a new key so store it away bgs.put(compareKey, compareValue); } return repeatedValue; } }</span></pre> <h3>Define the TagLib</h3> <p>JDeveloper has a handy editor to create the Tag library (<em>New </em>-> <em>Web Tier</em> -> <em>JSF / Facelets</em> -> <em>Facelets Tag Library</em>). This will allow you to define either custom components or functions, as we are doing here. You basically need just 3 bits of information to define your function in the taglib, the class you created, the function signature in the class and an arbitary namespace string to uniquely identify the taglib.</p> <p>The resulting XML is pretty simple. Notice the <namespace> attribute which will be referenced again in the <f:view> tag of the page.</p> <pre><?xml version = '1.0' encoding = 'windows-1252'?> <facelet-taglib xmlns: <namespace>oracle.com/adf/demo/breakgroup.taglib</namespace> <function> <description>Compares the passed-in value with the last one that was passed in for this key and returns true to indicate a match or false in any other case.</description> <function-name>compareWithLast</function-name> <function-class>oracle.demo.breakgroup.el.BreakGroupFunctions</function-class> <function-signature>java.lang.Boolean compareWithLastValue( java.lang.String, java.lang.Object)</function-signature> </function> </facelet-taglib></pre> <h3>Wire this into the Page</h3> <p>So finally we need to be able to consume this new function in the page. The first step is to create the namespace definition in the <f:view ...> tag, in this case I've used "bgf", thus:</p> <pre><f:view xmlns:</pre> <p>Then we can use that function in much the same way as the map reference we used before, but now we can pass two arguments - the value to compare and the compare key:</p> <pre><af:column <af:outputText </af:column> </pre> <p>So there you have it. As you can see it's really simple to extend Expression Language for yourself and in doing so opens up a large set of possibilities. Before I go though I'll point you off to a couple of other resources in this subject area which will provide a little more. 
</p> <p> </p> <ul> <li>Frank Nimphius's article on <a href="">Using JSTL Functions in Faces</a>.</li> <li>Lucas Jellema on using <a href="">Custom EL Expression in JSF 1.x</a></li> </ul> <p> </p> Axis Formatting in DVT Gauge Duncan Mills-Oracle 2012-01-11T04:16:03+00:00 2012-01-11T04:16:03+00:00 <p>Further to my <a href="" target="_blank" title="Enahance Gauge Control">last article</a> on gauge style UIs, I though that I'd write up a little more on the core gauge control itself, focusing this time on formatting the axis of the gauge. </p> <p>In our original example we just used the defaults for the axis which produced a gauge like this (I've removed the reference marker line that I was focusing on last time). </p> <p><img src="" alt="Defaul Gauge Axis" /></p> <p.</p> <p <em>content</em> attribute, this takes one or more string constants as a space or comma separated list so you can combine several options together (see example below). Here are the constants:</p> <table border="1" cellspacing="1" cellpadding="1"> <tbody> <tr> <td width="30%"><strong>TC_NONE</strong></td> <td width="70%">No label or tick-mark is displayed</td> </tr> <tr> <td style="width: 30%; "><strong>TC_MIN_MAX</strong></td> <td style="width: 70%; ">Display label or tick-mark at either end of the gauge</td> </tr> <tr> <td style="width: 30%; "><strong>TC_METRIC</strong></td> <td style="width: 70%; ">Display label or tick-mark at the data value</td> </tr> <tr> <td style="width: 30%; "><strong>TC_THRESHOLD </strong></td> <td style="width: 70%; ">Display label or tick-mark at each threshold boundary</td> </tr> <tr> <td style="width: 50%; "><strong>TC_INCREMENTS</strong></td> <td style="width: 50%; ">Displays labels or tick-marks at regular intervals as defined by the <dvt:tickMark> <em>majorIncrement</em> property. Note that for the label you cannot mix this attribute with the TC_THRESHOLD or TC_METRIC </td> </tr> <tr> <td style="width: 30%; "><strong>TC_MAJOR_TICK</strong></td> <td style="width: 70%; "> <p>Displays labels or tick-marks at min/max and the majorIncrement value - effectively the same as "TC_INCREMENTS TC_MIN_MAX". Note that for the label you cannot mix this attribute with the TC_THRESHOLD or TC_METRIC </p> </td> </tr> </tbody> </table> <p>We can combine those in different ways, for example to produce this example where we mark the min, max and current values in terms of text but only show a tick-mark at the value (TC_METRIC):</p> <p><img src="" alt="gauge with just the metric tick" /> </p> <p>The code for this one is:</p> <pre><dvt:gauge ....> <dvt:tickLabel <dvt:tickMark ...</pre> <p>Or this where we show a more regular axis, similar to the one that we produced when emulating the gauge with the horizontalBar chart:</p> <p><img src="" alt="Gauge with regular increments" /> </p> <p> </p> <p>And the code:</p> <pre><dvt:gauge ....> <dvt:tickLabel <dvt:tickMark ...</pre> <p>.</p> <p:</p> <p><img src="" alt="usethresholdColor green gauge" /> </p> <p>And a value of 15 which is down in the red zone will render like this:</p> <p><img src="" alt="useThresholdFillColor red gauge" /> </p> <p>The code is extremely simple for this one, just set the <i>useThresholdFillColor</i> attribute to true on the <dvt:gaugePlotArea> tag, another child of <dvt:gauge> and then the correct fill colour will be picked up from your threshold definitions.</p> <p>Happy gauge-ing! </p> Ever Wondered how uncommittedDataWarning Works? 
Duncan Mills-Oracle 2012-01-10T02:38:30+00:00 2012-01-10T02:38:30+00:00 <p>You may have come across the uncommittedDataWarning attribute on the <af:document> tag. With this attribute switched to "on" the framework will pop up a dialog like this when you try and navigate away from the page with the possibility of loosing the change: </p> <p><img src="" alt="Browser dialog for unsaved changes" /> </p> <p> What if you wanted to check yourself, in a programmable way or from an EL expression, against the same data so that you could, for example, popup your own dialog or mark a "save" menu item as enabled. Is it possible? Well yes of course and really very neat. Here's the code snippet (thanks to Dave S. who gave me this hint ages ago ) </p> <pre> ViewPortContext rootViewPort = ControllerContext.getInstance().getCurrentRootViewPort(); boolean uncommittedChanges = rootViewPort.isDataDirty();</pre> <p>This simple snippet will query all the transactional data controls on the page and in all regions in the page for their dirty status and deiver a simple boolean result to you.</p> An Enhanced Gauge control using HorizontalBar Duncan Mills-Oracle 2012-01-08T03:09:27+00:00 2012-01-08T03:20:56+00:00 <p:</p> <p><img src="" alt="Basic Gauge" /><br /></p> <p>The twist in this case was that we required an extra reference marker on the data bar that indicated the "optimal" value within a particular threshold. So in the image above imagine that within the green zone, 150 was the optimal value and we need to somehow indicate that.</p> :</p> <p><img src="" alt="Gauge with reference line" /> </p> . </p> <p:</p> <p><img src="" alt="Gauge using horizontal bar chart" /> </p> <p>As you can see it looks pretty similar, although there are some slight differences:</p> <p> </p> <ol> <li>Unlike the gauge, which displays value labels at the threshold boundaries, the axis on the chart has a regular labelling at fixed intervals based on the y1Axis setting.</li> <li.</li> <li>The proportions of the series bar / chart area are slightly different to the gauge. But that's only noticeable if you are mixing and matching.</li> </ol> <p> </p> <p> Let's break down how to create some of the features here:</p> <h4>Overall Size</h4> <p>The height / width of the cart had to be controlled somewhat to bring it down to gauge dimensions. This is acheived using <i>inlineStyle </i>on the horizontalBarGraph tag:</p> <pre><dvt:horizontalBarGraph </pre> <p>We also need to ensure that the y axis is fixed. By default it will be scaled based on the max value of the data which we don't want. To do this we define the min/max values on the nested y1Axis tag and set the <i>axisMaxAutoScaled</i> attribute to false. We also define the tickmark label interval to 30 here.</p> <pre><dvt:y1Axis </pre> <h4>Bar Styling</h4> <p.</p> <pre><dvt:seriesSet> <dvt:series </dvt:seriesSet></pre> <h4>Threshold banding</h4> <p>Next we want to add the banding to emulate the gauge thresholds. To do this we use the referenceObject tag with the RO_AREA <i>type</i> set to make it fill the defined area rather than draw a line. Again the referenceObject tags need to be enclosed in a parent, referenceObjectSet:</p> <pre>> </pre> <h4>The Reference Line </h4> <p:</p> <pre><dvt:referenceObject </pre> <p>The nice thing here is that we can flip this line so it overlays the series. 
This is not something we can do with Gauge:</p> <p> </p> <pre><dvt:referenceObject </pre> <p>Which gives us this:</p> <p><img src="" alt="Bar based gauge with overlay reference" /></p> <p> </p> <p>The <i>lineValue</i> attribute can, of course, be an EL expression rather than a hard-coded value so you can make the reference point dynamic.<br /></p> <h4>Using Alerts to Add Markers</h4> <p:</p> <p><img alt="Bar based gauge with overlay marker icons" src="" /></p> <p> </p> <p> </p> <p). <br /></p> <pre>></pre> <p>The xValue attribute maps the alert marker onto the required series bar.</p> <p>Finally, just for fun, see if you can work out how to do this one:</p> <p><img src="" alt="The puzzle" /> </p> <p>Answers in a comment please...</p> <h2>Final Thoughts</h2> <p>So should you use this technique rather than the out of the box gauge control? Well only of you <i>really</i>!</p> <p> </p> <p> </p> <p> </p> <p> </p> <p> </p> | http://blogs.oracle.com/groundside/feed/entries/atom?cat=%2FADF | CC-MAIN-2016-30 | refinedweb | 14,307 | 50.06 |
This was not an error in 5.1 but is in the 6.0 EAP
#region Class SupportFunctions
//************************************************************
/// <summary>
/// Various Support Functions that can be used by various pieces of the UI
/// </summary>
///
/// <remarks></remarks>
///
//************************************************************
#endregion Class SupportFunctions
public class SupportFunctions
In the EAP R# flags it as an error and, worse, it's not configurable so I have no way to suppress this message (as lame as I think the pattern is, I'm not updating thousands of classes just to make R# happy).
I have two requests:
1) This should not be flagged as an analysis error.
2) This should be configurable (especially if #1 is not done).
Using EAP build 3/12 in VS2010 Ultimate.
Thanks,
Robert.
Hello Robert,
This is a known problem and you can monitor the status of the following request:. Thank you!
Andrey Serebryansky
Senior Support Engineer
JetBrains, Inc
"Develop with pleasure!" | https://resharper-support.jetbrains.com/hc/en-us/community/posts/206709005-XML-comment-is-not-placed-on-valid-element-shows-up-as-an-error | CC-MAIN-2019-22 | refinedweb | 149 | 61.77 |
Episode #91: Will there be a PyBlazor?
Published Wed, Aug 15, 2018, recorded Thurs, Aug 2, 2018.
Sponsored by Datadog pythonbytes.fm/datadog
Brian #1: What makes the Python Cool
- Shankar Jha
- “some of the cool feature provided by Python”
- The Zen of Python:
import this
- XKCD:
import antigravity
- Swapping of two variable in one line:
a, b = b, a
- Create a web server using one line:
python -m http.server 8000
collections
itertools
- Looping with index:
enumerate
- reverse a list:
list(reversed(a_list))
ziptricks
- list/set/dict comprehensions
- Modern dictionary
pprint
_when in interactive REPL
- Lots of great external libraries
Michael #2: Django 2.1 released
- The release notes cover the smorgasbord of new features in detail, the model “view” permission is a highlight that many will appreciate.
- Django 2.0 has reached the end of mainstream support. The final minor bug fix release (which is also a security release), 2.0.8, was issued today.
- Features
- model “view” feature: This allows giving users read-only access to models in the admin.
- The new
[ModelAdmin.delete_queryset()]()method allows customizing the deletion process of the “delete selected objects” action.
- You can now override the default admin site.
- Lots of ORM features
- Cache: The local-memory cache backend now uses a least-recently-used (LRU) culling strategy rather than a pseudo-random one.
- Migrations: To support frozen environments, migrations may be loaded from
.pycfiles.
- Lots more
Brian #3: Awesome Python Features Explained Using Harry Potter
- Anna-Lena Popkes
- Initial blog post
- 100 Days of code, with a Harry Potter universe bent.
- Up to day 18 so far.
Michael #4: Executing Encrypted Python with no Performance Penalty
- Deploying Python in production presents a large attack surface that allows a malicious user to modify or reverse engineer potentially sensitive business logic.
- This is worse in cases of distributed apps.
- Common techniques to protect code in production are binary signing, obfuscation, or encryption. But, these techniques typically assume that we are protecting either a single file (EXE), or a small set of files (EXE and DLLs).
- In Python signing is not an option and source code is wide open.
- requirements were threefold:
- Work with the reference implementation of Python,
- Provide strong protection of code against malicious and natural threats,
- Be performant both in execution time and in stored space
- This led to a pure Python solution using authenticated cryptography.
- Created a
.pycefile that is encrypted and signed
- Customized import statement to load and decrypt them
- Implementation has no overhead in production. This is due to Python's in-memory bytecode cache.
Brian #5: icdiff and pytest-icdiff
- icdiff: “Improved colored diff”
- Jeff Kaufman
- pytest-icdiff: “better error messages for assert equals in pytest”
- Harry Percival
Michael #6: Will there be a PyBlazor?
- The .NET guys, and Steve Sanderson in particular, are undertaking an interesting project with WebAssembly.
- WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.
- Works in Firefox, Edge, Safari, and Chrome
- Their project, Blazor, has nearly the entire .NET runtime (AKA the CLR) running natively in the browser via WebAssembly.
- This is notable because the CLR is basically pure C code. What else is C code? Well, CPython!
- Includes Interpreted and AOT mode:
- Ahead-of-time (AOT) compiled mode: In AOT mode, your application’s .NET assemblies are transformed to pure WebAssembly binaries at build time.
- Being able to run .NET in the browser is a good start, but it’s not enough. To be a productive app builder, you’ll need a coherent set of standard solutions to standard problems such as UI composition/reuse, state management, routing, unit testing, build optimization, and much more.
- Mozilla called for this to exist for Python, but sadly didn’t contribute or kick anything off at PyCon 2018:
- Gary Bernhardt’s Birth and Death of JavaScript video is required pre-reqs as well (
asm.js).
Extras and personal info:
Michael:
- Building data-driven web apps course is being well received
- Guido van Rossum: Python 3 retrospective — Guido’s final presentation as BDFL | https://pythonbytes.fm/episodes/show/91/will-there-be-a-pyblazor | CC-MAIN-2019-04 | refinedweb | 700 | 54.73 |
How To Make a Custom Control
Learn how to make a custom control like a slider that has two knobs
Update 8/27/14: Post now updated to iOS 8 and Swift, check it out!
Controls are one of the most important building blocks of any application. They serve as the graphical components that allow your users to view and interact with their application’s data. This tutorial shows you how to make a custom control that’s reusable.
Apple supplies around 20 controls, such as
UITextField,
UIButton and
UISwitch. Armed with this toolbox of pre-existing controls, you can create a great variety of user interfaces.
However, sometimes you need to do something just a little bit different; something that the other controls can’t handle quite the way you want.
As an example, say you’re developing an application for searching property-for-sale listings. This fictional application allows the user to filter search results so they fall within a certain price range.
You could provide an interface which presents the user with a pair of
UISlider controls, one for setting the maximum price and one for setting the minimum, as shown in the screenshot below:
However, this interface doesn’t really help the user visualize the price range. It would be much better to present a single slider with two knobs to indicate the high and low price range they are searching for, as shown here:
The above interface provides a much better user experience; the users can immediately see that they are defining a range of values.
Unfortunately, this slider control isn’t in the standard UI toolbox. To implement this functionality, you’d have create it as a custom control.
You could build this range slider by subclassing
UIView and creating a bespoke view for visualizing price ranges. That would be fine for the context of your app — but it would be a struggle to port it to other apps.
It’s a much better idea to make this new component generic so that you can reuse it in any context where it’s appropriate. This is the very essence of custom controls.
Custom controls are nothing more than controls that you have created yourself; that is, controls that do not come with the UIKit framework. Custom controls, just like standard controls, should be generic and versatile. As a result, you’ll find there is an active and vibrant community of developers who love to share their custom control creations.
In this tutorial, you will implement your very own RangeSlider custom control to solve the problems above. You’ll touch on such concepts as extending existing controls, designing and implementing your control’s API, and even how to share your new control with the development community.
Anyhow, enough theory. Time to start customizing!
Getting Started
This section will walk you through creating the basic structure of the control, which will be just enough to allow you to render a simple range slider on screen.
Fire up Xcode. Go to File\New\Project, select the iOS\Application\Single View Application template and click Next. On the next screen, enter CERangeSlider as the product name, and fill in the other details as in the image below:
Note that you’ll use Automatic Reference Counting (ARC) in this project but not Storyboards, since this is a single-page application. Also note that a Class Prefix is set for you in this project. You can omit the “Class Prefix”, but if you do, be aware that the auto-generated names for some files will be different from what’s specified in this tutorial.
Finally, feel free to use your own “Organization Name” and “Company Identifier”. When you’re done, click Next. Then choose a place to save the project and click Create.
The first decision you need to make when creating a custom control is which existing class to subclass, or extend, in order to make your new control.
Your class must be a
UIView subclass in order for it be available in the application’s UI.
If you check the Apple UIKit reference, you’ll see that a number of the framework controls such as
UILabel and
UIWebView subclass
UIView directly. However, there are a handful, such as
UIButton and
UISwitch which subclass
UIControl, as shown in the hierarchy below:
Note: For a complete class hierarchy of UI components, check out the UIKit Framework Reference.
The
UIControl class implements the target-action pattern, which is a mechanism for notifying subscribers of changes.
UIControl also has a few properties that relate to control state. You’ll be using the target-action pattern in this custom control, so
UIControl will serve as a great starting point.
Right-click the CERangeSlider group in the Project Navigator and select New File…, then select the iOS\Cocoa Touch\Objective-C class template and click Next. Call the class CERangeSlider and enter UIControl into the “subclass of” field. Click Next and then Create to choose the default location to store the files associated with this new class.
Although writing code is pretty awesome, you probably want to see your control rendered on the screen to measure your progress! Before you write any code for your control, you should add your control to the view controller so that you can watch the evolution of the control.
Open up CEViewController.m, and import the header file of your new control at the top of the file, as so:
Further down CEViewController.m, add an instance variable just below the
@implementation statement:
And further down still in CEViewController.m, replace the boiler-plate
viewDidLoad with the following:
The above three sections of code simply create an instance of your all-new control in the given frame and add it to the view. The control background color has been set to red so that it will be visible against the app’s background. If you didn’t set the control’s background to red, then the control would be clear — and you’d be wondering where your control went! :]
Build and run your app; you should see something very similar to the following:
Before you add the visual elements to your control, you’ll need a few properties to keep track of the various pieces of information that are stored in your control. This will form the start of your control’s Application Programming Interface, or API for short.
Note: Your control’s API defines the methods and properties that you decide to expose to the other developers who will be using your control. You’ll read about API design a little later in this article — for now, just hang tight!
Adding Default Control Properties
Open up CERangeSlider.h and add the following properties between the
@interface /
@end statements:
These four properties are all you need to describe the state of this control, providing maximum and minimum values for the range, along with the upper and lower values set by the user.
Well-designed controls should define some default property values, or else your control will look a little strange when it draws on the screen!
Open up CERangeSlider.m, locate
initWithFrame which Xcode generated for you, and replace it with the following code:
Now it’s time to work on the interactive elements of your control; namely, the knobs to represent the high and low values, and the track the knobs slide on.
Images vs. CoreGraphics
There are two main ways that you can render controls on-screen:
- Images – create images that represent the various elements of your control
- CoreGraphics – render your control using a combination of layers and CoreGraphics
There are pros and cons to each technique, as outlined below:
Images — constructing your control using images is probably the simplest option in terms of authoring the control — as long as you can draw! :] If you want your fellow developers to be able to change the look and feel of your control, you would typically expose these images as UIImage properties.
Using images provides the most flexibility to developers who will use your control. Developers can change every single pixel and every detail of your control’s appearance, but this requires good graphic design skills — and it’s difficult to modify the control from code.
Core Graphics — constructing your control using CoreGraphics means that you have to write the rendering code yourself, which will require a bit more effort. However, this technique allows you to create a more flexible API.
Using Core Graphics, you can parameterize every feature of your control, such as colours, border thickness, and curvature — pretty much every visual element that goes into drawing your control! This approach allows developers who use your control to easily tailor it to their needs.
In this tutorial you’ll use the second technique — rendering the control using CoreGraphics.
Note: Interestingly, Apple tend to opt for using images in their controls. This is most likely because they know the size of each control and don’t tend to want to allow too much customisation. After all, they want all apps to end up with a similar look-and-feel.
In Xcode, click on the project root to bring up the project settings page. Next, select the Build Phases tab and expand the Link Binary With Libraries section. Now click the plus (+) button at the bottom left of that section you just opened.
Either search for, or look down the list for, QuartzCore.framework. Select QuartzCore.framework and then click Add.
The reason you need to add the QuartzCore framework is because you’ll be using classes and methods from it to do manual drawing of the control.
This screenshot should help you find your way to adding the QuartzCore framework if you’re struggling:
Open up CERangeSlider.m and add the following import to the top of the file.
Add the following instance variables to CERangeSlider.m, just after the
@implementation statement:
These three layers —
_tracklayer, _upperKnobLayer, and
_lowerKnobLayer — will be used to render the various components of your slider control. The two variables
_knobWidth and
_useableTrackLength will be used for layout purposes.
Next up are some default graphical properties of the control itself.
In CERangeSlider.m, locate
initWithFrame: and add the following code just below the code you added to initialise the instance variables, inside the
if (self) { } block:
The above code simply creates three layers and adds them as children of the control’s root layer.
Next, add the following methods to CERangeSlider.m:
setLayerFrames sets the frame for both knob layers and the track layer based on the current slider values.
positionForValue maps a value to a location on screen using a simple ratio to scale the position between the minimum and maximum range of the control.
Build and run your app; your slider is starting to take shape! It should look similar to the screenshot below:
Your control is starting to take shape visually, but almost every control provides a way for the app user to interact with it.
For your control, the user must be able to drag each knob to set the desired range of the control. You’ll handle those interactions, and update both the UI and the properties exposed by the control.
Adding Interactive Logic
The interaction logic needs to store which knob is being dragged, and reflect that in the UI. The control’s layers are a great place to put this logic.
Right-click the CERangeSlider group in the Project Navigator and select New File…. Next, select the iOS\Cocoa Touch\Objective-C class template and add a class called CERangeSliderKnobLayer, making it a subclass of CALayer.
Open up the newly added header CERangeSliderKnobLayer.h and replace its contents with the following:
This simply adds two properties, one that indicates whether this knob is highlighted, and one that is a reference back to the parent range slider.
Next, change the type of the
_upperKnobLayer and
_lowerKnobLayer instance variables, by finding the
@implementation block and replacing their definitions with the following:
These layers can now be instances of the newly created
CERangeSliderKnobLayer class.
Still working in CERangeSlider.m, find
initWithFrame: and replace the
upperKnobLayer and
lowerKnobLayer layer creation code with the following:
The above code simply makes use of the newly added class to create the layers, and sets the layer’s
slider property to reference
self.
Build and run your project; check to see if everything still looks the same as detailed in the following screenshot:
Now that you have the slider layers in place using
CERangeSliderKnobLayer, you need to add the ability for the user to drag the sliders around.
Adding Touch Handlers
Open CERangeSlider.m and near the top of the file where the instance variables are defined, add the following, below the declaration of
_useableTrackLength:
This variable will be used to track the touch locations.
How are you going to track the various touch and release events of your control?
UIControl provides several methods for tracking touches. Subclasses of
UIControl can override these methods in order to add their own interaction logic.
In your custom control, you will override three key methods of
UIControl:
beginTrackingWithTouch,
continueTrackingWithTouch and
endTrackingWithTouch.
Add the following method to CERangeSlider.m:
The method above is invoked when the user first touches the control.
First, it translates the touch event into the control’s coordinate space. Next, it checks each knob to see whether the touch was within its frame. The return value for the above method informs the
UIControl superclass whether subsequent touches should be tracked.
Tracking touch events continues if either knob is highlighted. The call to
setNeedsDisplay ensures that the layers redraw themselves — you’ll see why this is important later on.
Now that you have the initial touch event, you’ll need to handle the events as the user moves their finger across the screen.
Add the following method to CERangeSlider.m:
Here’s a breakdown of the code above, comment by comment:
- First you calculate a delta, which determines the number of pixels the user’s finger travelled. You then convert it into a scaled value delta based on the minimum and maximum values of the control.
- Here you adjust the upper or lower values based on where the user drags the slider to. Note that you’re using a
BOUNDmacro which is a little easier to read than a nested
MIN/
MAXcall.
- This section sets the
disabledActionsflag inside a
CATransaction. This ensures that the changes to the frame for each layer are applied immediately, and not animated. Finally,
setLayerFramesis called to move the knob to the correct location.
You’ve coded the dragging of the slider — but you still need to handle the end of the touch and drag events.
Add the following method to CERangeSlider.m:
The above code simply resets both knobs to a non-highlighted state.
Build and run your project, and play around with your shiny new slider! It should resemble the screenshot below:
You’ll notice that when the slider is tracking touches, you can drag your finger beyond the bounds of the control, then back within the control without losing your tracking action. This is an important usability feature for small screen devices with low precision pointing devices — or as they’re more commonly known, fingers! :]
Change Notifications
So you now have an interactive control that the user can manipulate to set upper and lower bounds. But how do you communicate these change notifications to the calling app so that the app knows the control has new values?
There are a number of different patterns that you could implement to provide change notification: NSNotification, Key-Value-Observing (KVO), the delegate pattern, the target-action pattern and many others. There are so many choices!
What to do?
If you look at the UIKit controls, you’ll find they don’t use NSNotification or encourage the use of KVO, so for consistency with UIKit you can exclude those two options. The other two patterns — delegates and target-action patterns — are used extensively in UIKit.
Here’s a detailed analysis of the delegate and the target-action pattern:
Delegate pattern – With the delegate pattern you provide a protocol which contains a number of methods that are used for a range of notifications. The control has a property, usually named
delegate, which accepts any class that implements this protocol. A classic example of this is
UITableView which provides the
UITableViewDelegate protocol. Note that controls only accept a single delegate instance. A delegate method can take any number of parameters, so you can pass in as much information as you desire to such methods.
Target-action pattern – The target-action pattern is provided by the
UIControl base class. When a change in control state occurs, the target is notified of the action which is described by one of the
UIControlEvents enum values. You can provide multiple targets to control actions and while it is possible to create custom events (see
UIControlEventApplicationReserved) the number of custom events is limited to 4. Control actions do not have the ability to send any information with the event. So they cannot be used to pass extra information when the event is fired.
The key differences between the two patterns are as follows:
- Multicast — the target-action pattern multicasts its change notifications, while the delegate pattern is bound to a single delegate instance.
- Flexibility — you define the protocols yourself in the delegate pattern, meaning you can control exactly how much information you pass. Target-action provides no way to pass extra information and clients would have to look it up themselves after receiving the event.
Your range slider control doesn’t have a large number of state changes or interactions that you need to provide notifications for. The only things that really change are the upper and lower values of the control.
In this situation, the target-action pattern makes perfect sense. This is one of the reasons why you were told to subclass
UIControl right back at the start of this tutorial!
Aha! It’s making sense now! :]
The slider values are updated inside
continueTrackingWithTouch:withEvent:, so this is where you’ll need to add your notification code.
Open up CERangeSlider.m, locate
continueTrackingWithTouch:withEvent:, and add the following just before the “
return YES” statement:
That’s all you need to do in order to notify any subscribed targets of the changes.
Well, that was easier than expected!
Now that you have your notification handling in place, you should hook it up to your app.
Open up CEViewController.m and add the following code to the end of
viewDidLoad:
The above code invokes the
slideValueChanged each time the range slider sends the
UIControlEventValueChanged action.
Now add the following method to CEViewController.m:
This method simply logs the range slider values to the console window as proof that your control is sending notifications as planned.
Build and run your app, and move the sliders back and forth. You should see the control’s values in the output window, as in the screenshot below:
You’re probably sick of looking at the multi-coloured range slider UI by now. It looks like an angry fruit salad!
It’s time to give the control a much-needed facelift!
Modifying Your Control With CoreGraphics
First, you’ll update the graphics of the “track” that the sliders move along.
Right-click the CERangeSlider group in the Project Navigator and select New File…. Next, select the iOS\Cocoa Touch\Objective-C class template and add a class called CERangeSliderTrackLayer, making it a subclass of CALayer.
Open up the newly added file CERangeSliderTrackLayer.h, and replace its contents with the following:
The code above adds a reference back to the range slider, just as you did previously for the knob layer.
Open up CERangeSlider.m and add the following import to the top of the file:
A little further down CERangeSlider.m, locate the
_trackLayer instance variable and modify it to be an instance of the new layer class, as below:
Now find
initWithFrame: in CERangeSlider.m and update the layer creation code to match the following:
The code above ensures that the new track layer is used — and that the hideous background colors are no longer applied. :]
There’s just one more bit — remove the red background of the control.
Open up CEViewController.m, locate the following line in
viewDidLoad and remove it:
Build and run now…what do you see?
Do you see nothing? That’s good!
Good? What’s good about that? All of your hard work — gone?!?!
Don’t fret — you’ve just removed the gaudy test colors that were applied to the layers. Your controls are still there — but now you have a blank canvas to dress up your controls!
Since most developers like it when controls can be configured to emulate the look and feel of the particular app they are coding, you will add some properties to the slider to allow customization of the “look” of the control.
Open up CERangeSlider.h and add the following code just beneath the properties you added earlier:
The purposes of the various colour properties are fairly straightforward. And
curvaceousness? Well, that one is in there for a bit of fun — you’ll find out what it does shortly! :]
Finally,
positionForValue: is a method you wrote earlier. Here you’re just making it publicly accessible from the various layers.
You now need some default values for your control’s properties.
Open up CERangeSlider.m and add the following code to
initWithFrame:, just below the code that initializes the max, min, upper and lower values:
Next, open up CERangeSliderTrackLayer.m and add the following import to the top of the file:
This layer renders the track that the two knobs slide on. It currently inherits from
CALayer, which only renders a solid color.
In order to draw the track, you need to implement
drawInContext: and use the Core Graphics APIs to perform the rendering.
Note: To learn about Core Graphics in depth, the Core Graphics 101 tutorial series from this site is highly recommended reading, as exploring Core Graphics is out of scope for this tutorial.
Add the following method to CERangeSliderTrackLayer.m, just below the
@implementation statement:
As you can see, this has quite a sizeable chunk of Core Graphics code!
Have a look at the graphic below which shows how each of the commented sections are layered together:
The numbered sections above refer to the numbered code comments, which are explained as follows:
- Once the track shape is clipped, the background is filled in.
- The highlighted range is filled in next.
- A subtle highlight is added to the control to give it some depth.
- An inner shadow is painted inside the track.
- Finally, the shaded border of the track is rendered.
When it’s all broken out step-by-step, you can easily see how the various properties of
CERangeSlider affect the rendering of the track layer.
Build and run to see your shiny new track layer rendered in all its glory! It should look like the following:
Play around with the various values for the exposed properties to see how they affect the rendering of the control.
If you’re still wondering what
curvaceousness does, try changing that as well!
You’ll use a similar approach to render the knobs.
Open up CERangeSliderKnobLayer.m and add the following import to the top of the file:
Add the following method to CERangeSliderKnobLayer.m, just below the
@implementation statement:
Here’s another breakdown of the rendering steps, with each commented section explained below:
- Once a path is defined for the shape of the knob, the shape is filled in. Notice the subtle shadow which gives the impression the knob hovers above the track.
- The border is rendered next.
- A subtle gradient is applied to the knob.
- Finally, if the button is highlighted — that is, if it’s being moved — a subtle grey shading is applied.
Build and run once again; it’s looking pretty sharp and should resemble the screenshot below:
You can easily see that rendering your control using Core Graphics is really worth the extra effort. Using Core Graphics results in a much more versatile control compared to one that is rendered from images alone.
Handling Changes to Control Properties
So what’s left? The control now looks pretty snazzy, the visual styling is versatile, and it supports target-action notifications.
It sounds like you’re done — or are you?
Think for a moment about what happens if one of the range slider properties is set in code after it has been rendered. For example, you might want to change the slider range to some preset value, or change the track highlight to indicate a valid range.
Currently there is nothing observing the property setters. You’ll need to add that functionality to your control.
In order to detect when the control’s properties have been externally set, you’ll have to write your own setter implementation.
Your first inclination might be to add some code that looks like this:
When the
trackColor property is set, the above code informs the track layer that it needs to redraw itself.
But with eight properties on the range slider API, writing the same repetitive code over and over again is going to be somewhat of a chore.
However, you’re a custom control ninja — and code ninjas look to refactor and re-use code at every opportunity!
This looks like a job for a macro.
Open up CERangeSlider.m and add the following code just above
initWithFrame::
The above defines a macro which takes 4 parameters, and uses those parameters to generate a synthesized property and a property setter.
Again in CERangeSlider.m, add the following code directly below the previous macro:
The above code generates the setters for all eight in one fell swoop. As well, it invokes the setter method that updates each individual property.
redrawLayers is called for the properties that affect the control’s visuals, and
setLayerFrames is invoked for properties that affect the control’s layout.
That’s all you need to do in order to ensure the range slider reacts to property changes.
However, you now need a bit more code to test your new macros and make sure everything is hooked up and working as expected.
Open up CEViewController.m and add the following code to the end of
viewDidLoad:
This will invoke
updateState after a 1 second pause.
Add the following implementation of
updateState to CEViewController.m:
The above method changes the track highlight colour to red, and changes the shape of the range slider and its knobs.
Build and run your project, and watch the range slider change from this:
to this:
How easy was that?
Note: The code you just added to the view controller illustrates one of the most interesting, and often overlooked, points about developing custom controls – testing.
When you are developing a custom control, it’s your responsibility to exercise all of its properties and visually verify the results. A good way to approach this is to create a visual test harness with various buttons and sliders, each of which connected to a different property of the control. That way you can modify the properties of your custom control in real time — and see the results in real time.
Where To Go From Here?
Your range slider is now fully functional and ready to use within your own applications!
However, one of the key benefits of creating generic custom controls is that you can share them across projects — and share them with other developers.
Is your control ready for prime time?
Not just yet. Here are a few other points to consider before sharing your custom controls:
Documentation – Every developer’s favourite job! :] While you might like to think your code is beautifully crafted and self-documenting, other developers will no doubt disagree. A good practice is to provide public API documentation, at a minimum, for all publicly shared code. This means documenting all public classes and properties.
For example, your
CERangeSlider needs documentation to explain what it is — a slider which is defined by four properties:
max,
min,
upper, and
lower — and what it does — allows a user to visually define a range of numbers.
Robustness – What happens if you set the
upperValue to a value greater than the
maximumValue? Surely you would never do that yourself – that would be silly, wouldn’t it? But you can guarantee that someone eventually will! You need to ensure that the control state always remains valid — despite what some silly coder tries to do to it.
API Design – The previous point about robustness touches on a much broader topic — API design. Creating a flexible, intuitive and robust API will ensure that your control can be widely used, as well as wildly popular. At my company, ShinobiControls, we hold meetings that can last for hours where we debate every minor detail of our APIs!
API design is a topic of great depth, and one which is out of scope for this tutorial. If you are interested, Matt Gemmell’s 25 rules of API design comes highly recommended.
There are a number of places to start sharing your controls with the world. Here are few suggestions of places to start:
- GitHub – GitHub has become one of the most popular places to share open source projects. There are already numerous custom controls for iOS on GitHub. What’s great about GitHub is that it allows people to easily access your code and potentially collaborate by forking your code for other controls, or to raise issues on your existing controls.
- CocoaPods – To allow people to easily add your control to their projects, you can share it via CocoaPods, which is a dependency manager for iOS and OSX projects.
- Cocoa Controls – This site provides a directory of both commercial and open source controls. Many of the open source controls covered by Cocoa Controls are hosted on GitHub, and it’s a great way of promoting your creation.
- Binpress – This site provides both free and paid-for controls. You can often find what you’re looking for here, but if you don’t then why not make your control and then put it on here. You never know, people might be willing to buy it if you’ve written a clean, easy-to-use API!
Hopefully you’ve had fun creating this range slider control, and perhaps you have been inspired to create a custom control of your own. If you do, please share it in the comments thread for this article — we’d love to see your creations!
The source code for this control is available on GitHub with one commit for each ‘build and run’ step for this article. If you get lost, you can pick things up from the last step you were on! Neat! :]
You can download the complete range slider control project here.
9 Comments
beginTrackingWithTouch:withEvent: not called!
PS: not work in iOS 6.0 - but in iOS 5 all works, I think that this is a iOS v6 error
//edit: I found it myself. You just have to set
self.contentsScale = [UIScreen mainScreen].scale;
in all layer drawInContext methods.
//edit: I still have one problem: although
is called the graphics are not sharp before the first touch.
- Code: Select all
setNeedsDisplay ... eclaration | http://www.raywenderlich.com/36288/how-to-make-a-custom-control | CC-MAIN-2014-41 | refinedweb | 5,231 | 62.27 |
Why Git and SVN Fail at Managing Dataset Versions. The ability to manage a large number of datasets, their versions, and derived datasets, is a key foundational piece of a system we are building for facilitating collaborative data science, called DataHub. One of the key challenges in building such a system is keeping track of all the datasets and their versions, storing them compactly, and querying over them or retrieving them for analysis.
In the rest of the post, we focus on a specific technical problem: given a collection of datasets and any versioning information among them, how to store them and retrieve specific datasets. There are two conflicting optimization goals here:
Minimize the Total Storage Cost: Here we simply measure the total storage as the number of bytes required to store all the information. This has been the primary motivation for much of the work on storage deduplication, which is covered in-depth in a recent survey by Paulo and Pereira.
Minimize Recreation Cost, i.e., the Cost of Recreating Versions: This is somewhat trickier to define. First, we may want to minimize the average recreation cost over all datasets, or the maximum over them, or some weighted average (with higher weight given to newer versions). Second, how to measure the recreation cost is not always clear. Should we look at the wall-clock time, the amount of data that needs to be retrieved, or some other metric that considers the CPU cycles or network I/O needed, etc. For the rest of this post, we will assume that recreation cost is equal to the total number of bytes that need to be fetched from the disk. The algorithms that we develop in the paper are more general, and can handle arbitrary recreation costs.
The fundamental challenge here is the storage-recreation trade-off: the more storage we use, the faster it is to recreate or retrieve datasets, while the less storage we use, the slower it is to recreate or retrieve datasets. Despite the fundamental nature of this problem, there has been a surprisingly little amount of work on it.
The first figure below shows an example with 5 datasets (also referred to as files or objects below). Each vertex in the graph corresponds to a dataset and the associated number is the size of the dataset (in bytes). A directed edge from u to v is annotated with the size of the
delta required to reconstruct v from u. The delta required to construct
u from v is usually different from that required to construct v from u (the standard UNIX diff generates a union of those two directional deltas by default, but can be made to generate one-way deltas as well).
One extreme in terms of storing these datasets is to store each dataset by itself, ignoring any overlaps between them. This results in a total storage cost of 12000 and average recreation cost of 12000/5 = 2400. The other extreme is to store only one dataset by itself, and store the others using deltas from that one. The right figure above shows one example, where D1 is stored by itself; for D3 and D4, a delta from D1 is stored; for D2, a delta from D4 is stored, and finally for D5, a delta from D2 is stored. So to reconstruct D2, we have to first fetch D1 and the delta from D1 to D4 and reconstruct D4, and then fetch the delta from D4 to D2 and use that reconstruct D2. The total storage cost here is 3000+500+300+100+100=4000, much lower than the above option. But the recreation costs here are: (1) D1 = 3000, (2) D3 = 3000 + 500 = 3500, (3) D4 = 3300, (4) D2 = 3000 + 300 + 100 = 3400, and (5) D5 = 3500, with the average being 3340. Using multiple roots gives in-between solutions.
For a larger number of datasets, the differences between the two extremes are much higher. For one of the workloads we used in our paper containing about 100,000 versions with an average size of about 350MB, the storage requirements of the two extreme solutions were 34TB (store all datasets separately) vs 1.26TB (best storage space solution, and the average recreation costs were 350MB vs 115GB !!
Version Control Systems (VCS) like Git, SVN, or Mercurial, are also faced with this problem and they use different greedy heuristics to solve it. Some of these algorithms are not well-documented. Below we briefly discuss the solutions employed by git and svn, and then recap a couple of the experiments from our paper, where we discuss this overall problem and different solutions to it in much more detail.
Git repack
Git uses delta compression to reduce the amount of storage required to store a large number of files (objects) that contain duplicated information. However, git’s algorithm for doing so is not clearly described anywhere. An old discussion with Linus has a sketch of the algorithm. However there have been several changes to the heuristics used that don’t appear to be documented anywhere.
The following describes our understanding of the algorithm based on the latest git source code (Cloned from git repository on 5/11/2015, commit id: 8440f74997cf7958c7e8ec853f590828085049b8).
Here we focus on
repack, where the decisions are made for a large group of objects.
However, the same algorithm appears to be used for normal commits as well.
Most of the algorithm code is in file:
builtin/pack-objects.c
Sketch: At a high level, the algorithm goes over the files/objects in some order, and for each object, it greedily picks one of the prior W objects as its parent (i.e., it stores a delta from that object to the object under consideration).
As an example, let the order be D1, D4, D2, D3, D5 for the example above. D1 would then be picked as a root. For D4, there is only one option and we will store a delta from D1 to D4. For D2, we choose between D4 and D1; although the delta from
D4 is smaller, it also results in a longer
delta chain, so a decision is made based on a formula as discussed below.
Details follow.
Step 1
Sort the objects, first by
type, then by
name hash, and then by
size (in the decreasing order).
The comparator is (line 1503):
static int type_size_sort(const void *_a, const void *_b)
Note the
name hash is not a true hash; the
pack_name_hash() function simply creates a number from the last 16 non-white space characters, with the last characters counting the most (so all files with the same suffix, e.g.,
.c, will sort together).
Step 2
The next key function is
ll_find_deltas(), which goes over the files in the sorted order.
It maintains a list of W objects (W = window size, default 10) at all times.
For the next object, say $O$, it finds the delta between $O$ and each of the objects, say $B$, in the window; it chooses the the object with the minimum value of:
delta(B, O) / (max\_depth - depth of B)
where
max_depth is a parameter (default 50), and depth of B refers to the length of delta chain between a root and B.
The original repack algorithm appears to have only used
delta(B, O) to make the decision,
but the
depth bias (denominator) was added at a later point to give preference to smaller delta chains even if the corresponding delta was slightly larger.
The key lines for the above part:
line 1812 (check each object in the window):
ret = try_delta(n, m, max_depth, &mem_usage);
lines 1617-1618 (depth bias): ` max_size = (uint64_t)max_size * (max_depth - src->depth) / (max_depth - ref_depth + 1);`
line 1678 (compute delta and compare size): ` delta_buf = create_delta(src->index, trg->data, trg_size, &delta_size, max_size);`
create_delta() returns non-null only if the new delta being tried is smaller than the current delta (modulo depth bias),
specifically, only if the size of the new delta is less than
max_size argument.
Note: lines 1682-1688 appear redundant (the condition would never evaluate to true) given the depth bias calculations.
Step 3
Originally the window was just the last W objects before the object O under consideration. However, the current algorithm shuffles the objects in the window based on the choices made. Specifically, let b_1, …, b_W be the current objects in the window. Let the object chosen to delta against for O be b_i. Then b_i would be moved to the end of the list, so the new list would be: b_1, b_2, …, b_{i-1}, b_{i+1}, …, b_W, O, b_i. Then when we move to the new object after O (say O’), we slide the window and so the new window then would be: b_2, …, b_{i-1}, b_{i+1}, …, b_W, O, b_i, O’.
Small detail: the list is actually maintained as a circular buffer so the list doesn’‘t have to be physically shifted (moving b_i to the end does involve a shift though). Relevant code here is lines 1854-1861.
Finally we note that git never considers/computes/stores a delta between two objects of different types, and it does the above in a multi-threaded fashion, by partitioning the work among a given number of threads. Each of the threads operates independently of the others.
SVN
SVN also uses delta compression, using a technique called
skip-deltas that ensures that the delta chains are never too long (specifically, no more than logarithmic in the number of versions).
The specifics depend on which backend is being used, and a detailed discussed can be found on the SVN website.
This technique does not look at the file contents when making the decisions of which deltas to use, and can require very high storage for long version chains. It is also not clear
how to use this technique when there is no clear versioning relationship between the files.
Comparison with SVN and Git
We discuss the results of one experiment here, to illustrate the inefficiencies of the existing solutions and the benefits of careful optimizations.
We take 100 forks of the
Linux repository, and For each of those, we checkout the latest version and concatenate all files in it (by traversing the directory structure in lexicographic order).
Thereafter, we compute deltas between all pairs of versions in a repository. This gives us a workload with 100 files, with an average size of 422MB.
SVN: We create an FSFS-type repository in SVN (v1.8.8), which is more space efficient that a Berkeley DB-based repository. We then
import the entire LF dataset into the repository in a single commit. The amount of space occupied by
the
db/revs/ directory is around 8.5GB and it takes around 48 minutes to complete the import.
We contrast this with the naive approach of applying a
gzip on the files which results in
total compressed storage of 10.2GB.
The main reason behind SVN’s poor performance is its use of skip-deltas to ensure that at most
O(log n) deltas are needed for reconstructing
any version; that tends to lead it to repeatedly store redundant delta information as a result of
which the total space requirement increases significantly.
Git: In case of Git (v1.7.1), we add and commit the files in the repository and then run a
git repack -a -d --depth=50 --window=50 on the repository.
The size of the Git pack file is 202 MB
although the repack consumes 55GB memory and takes 114 minutes (for higher window sizes, Git fails to
complete the repack as it runs out of memory).
MCA: In comparison, the solution found by the MCA algorithm (i.e., the optimal storage solution as found by our system) occupies 159MB using
xdiff from LibXDiff library for computing the deltas;
xdiff also forms the basis of Git’s delta computing
routine. The total time taken is around 102 minutes; this includes the time taken to compute the
deltas and then to find the MCA for the corresponding graph.
Discussion
In our paper linked above, we present a much more detailed experimental comparisons over several other datasets, where we evaluate the trade-offs between storage and recreation costs more thoroughly; one of the algorithms we compare with is our reimplementation of the above git heuristic. Our detailed experiment shows that our implementation of that heuristic (GitH) required more storage than our proposed heuristic, called LMG, for guaranteeing similar recreation costs. | http://www.cs.umd.edu/~amol/DBGroup/2015/06/26/datahub.html | CC-MAIN-2017-47 | refinedweb | 2,085 | 65.46 |
Dear Members,
I want to know are there any methods to read the URL of various links of a web page. To give you clarity, I wish to give you a realistic example. In the web site, "", there are many links in the form of Finance, Games, Life Style, News etc.
If you place the mouse pointer on any of those links, you can see the URL associated with it in the status bar at the bottom of the web browser. For instance, if you place the mouse pointer on the link "Games", you can see the URL "" displayed on the status bar at the bottom of the IE browser.
Similarly, if you place mouse pointer on the link of "Life Style", you can see the URL "" displayed on status bar at the bottom of the IE browser. In that web page, there are so many such links available.
My wish is that I want to write a Java Program (something like public class GrabURLs) that takes the URL of any web page (not necessarily, "", it can be any web page) in its constructor. From the URL which is passed in the constructor, the program has to find whether any links are available in that web page; if so, the program should grab all the links contained in that page in the form of String array or Vector.
For example, I write code something like :
GrabURLs webPage = new GrabURLs("");
String[] links = webPage.getLinks();
The links array is supposed to contain elements such as links[0] = "", links[1] = "" and so on.
Now what source code will I write for the method getLinks() of the class GrabURLs.
I would be delighted if someone gives a solution. The solution need not be a full Java program; at least, I want to know what are all the classes and methods involved to achieve this challenging task.
With best regards,
Abitha. | http://www.javaprogrammingforums.com/%20java-theory-questions/945-how-read-urls-web-page-printingthethread.html | CC-MAIN-2014-42 | refinedweb | 316 | 84.1 |
A light-weight event bus library for Dart implementing the pub-sub pattern.
A simple usage example:
import 'package:bus/bus.dart'; class Event { final DateTime timestamp; Event() : this.timestamp = new DateTime.now(); } main() async { // Create a new bus, which accepts messages of type Event. var bus = new Bus<Event>(); // Subscribe a single handler bus.subscribe((Event event) { print('An event occurred at ${event.timestamp}.'); }); // Post the event and (optional) await for handlers to receive them await bus.post(new Event()); }
Also supported is subscribing a class full of handlers:
class GameListener implements Listener { @handler _onGame(GameEvent event) { print('[An event occurred at ${event.timestamp}]'); } @handler _onChat(ChatEvent event) { print('${event.username} says "${event.message}"'); } } ... bus.subscribeAll(new GameListener());
See the game example for explicit details.
Please file feature requests and bugs at the issue tracker.
Add this to your package's pubspec.yaml file:
dependencies: bus: ^0.0.3
You can install packages from the command line:
with pub:
$ pub get
Alternatively, your editor might support
pub get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:bus. | https://pub.dev/packages/bus/versions/0.0.3 | CC-MAIN-2019-22 | refinedweb | 189 | 62.44 |
4, 2007
This article was contributed by Daniel Drake
When developing kernel code, it is usually important to consider
constraints and requirements of architectures other than your
own. Otherwise, your code may not be portable to other architectures, as I
recently discovered when an unaligned memory access bug was reported
in a driver which I develop. Not having much familiarity with the concepts
of unaligned memory access, I set out to research the topic and complete my
understanding of the issues.
Certain architectures rule that memory
accesses must meet some certain alignment criteria or are otherwise
illegal. The exact criteria that determines whether an access is suitably
aligned depends upon the address being accessed and the number of bytes
involved in the transaction, and varies from architecture to architecture.
Kernel code is typically written to obey natural alignment
constraints, a scheme that is sufficiently strict to ensure portability to
all supported architectures. Natural alignment requires that every N byte
access must be aligned on a memory address boundary of N. We can express
this in terms of the modulus operator: addr % N must be
zero. Some examples:
addr % N
0x10004 % 4 = 0
0x10005 % 4 = 1
movb
movw
movl
void example_func(unsigned char *data) {
u16 value = *((u16 *) data);
[...]
}
The effects of unaligned access vary from architecture to
architecture. On architectures such as ARM32 and Alpha, a processor
exception is raised when an unaligned access occurs, and the kernel is able
to catch the exception and correct the memory access (at large cost to
performance). Other architectures raise processor exceptions but the
exceptions do not provide enough information for the access to be
corrected. Some architectures that are not capable of unaligned access do
not even raise an exception when unaligned access happens, instead they
just perform a different memory access from the one that was requested and
silently return the wrong answer.
Some architectures are capable of performing unaligned accesses without
having to raise bus errors or processor exceptions, i386 and x86_64 being
some common examples. Even so, unaligned accesses can degrade performance
on these systems, as Andi Kleen explains:
At the end of the day, if you write code that causes unaligned accesses
then your software will not work on some systems. This applies to both
kernel-space and userspace code.
The theory is relatively easy to get to grips with, but how does this apply
to real code? After all, when you allocate a variable on the stack, you
have no control over its address. You don't get to control the addresses
used to pass function parameters, or the addresses returned by the memory
allocation functions. Fortunately, the compiler understands the alignment
constraints of your architecture and will handle the common cases just
fine; it will align your variables and parameters to suitable boundaries,
and it will even insert padding inside structures to ensure the access to
members is suitably aligned. Even when using the GCC-specific packed
attribute (which tells GCC not to insert padding), GCC will
transparently insert extra instructions to ensure that standard accesses to
potentially unaligned structure members do not violate alignment
constraints (at a cost to performance).
packed
In order to illustrate a situation that might cause unaligned memory
access, consider the example_func() implementation from
above. The first line of the function accesses two bytes (16 bits) of data
from a memory address passed in as a function parameter; however, we do not
have any other information about this address. If the data
parameter points to an odd address (as opposed to even), for example
0x10005, then we end up with an unaligned access. The main
places where you will potentially run into unaligned accesses are when
accessing multiple bytes of data (in a single transaction) from a pointer,
and when casting variables to types of increased lengths.
example_func()
data
0x10005
Conceptually, the way to avoid unaligned access is to use byte-wise memory
access because accessing single bytes of memory cannot violate alignment
constraints. For example, for a little-endian system we could replace the
example_func() implementation with the following:
void fixed_example_func(unsigned char *data) {
u16 value = data[0] | data[1] << 8;
[...]
}
void fixed_example_func(unsigned char *data) {
u16 value = data[0] | data[1] << 8;
[...]
}
memcpy() is another possible alternative in the general case,
as long as either the source or destination is a pointer to an 8-bit data
type (i.e. char). Inside the kernel, two macros are provided
which simplify unaligned accesses: get_unaligned() and
put_unaligned(). It is worth noting that using any of these
solutions is significantly slower than accessing aligned memory, so it is
wise to completely avoid unaligned access where possible.
memcpy()
char
get_unaligned()
put_unaligned()
Another option is to simply document the fact that
example_func() requires a 16-bit-aligned data parameter, and
rely on the call sites to ensure this or simply not use the
function. Linux's optimized routine for comparing two ethernet addresses
(compare_ether_addr()) is a real life example of this: the
addresses must be 16-bit-aligned.
compare_ether_addr()
I have applied my newfound knowledge to the task of writing some kernel
documentation, which covers this topic in more detail. If you want to learn
more, you may want to read the most recent
revision (as of this writing) of the document. Additionally, the initial
revision of the document generated a lot of interesting discussion, but
be aware that the initial attempt contained some mistakes. Finally, chapter
11 of Linux Device Drivers
touches upon this topic.
I'd like to thank everyone who helped me improve my understanding of
unaligned access, as this article would not have been possible without
their assistance.
Memory access and alignment
Posted Dec 6, 2007 5:00 UTC (Thu) by cventers (subscriber, #31465)
[Link]
One place where alignment does matter on x86 is in SMP. As noted by the
glibc documentation, aligned word-sized reads and writes are atomic on all
known POSIX platforms. If you respect memory visibility issues, there are
certain ways you can exploit this fact to avoid the overhead of locks. In
fact, if you notice, the kernel's atomic_t type is pretty straightforward
on most platforms - especially the simple read and store operations. The
only requirement is alignment and then it is atomic for free.
Posted Dec 6, 2007 21:12 UTC (Thu) by dsd (guest, #49212)
[Link]
A slightly tweaked version of the document has now been submitted for inclusion with the
kernel documentation. You can read it here:
Posted Dec 9, 2007 18:28 UTC (Sun) by oak (guest, #2786)
[Link]
On some ARM hardware some of the unaligned accesses may not provide an
exception, see "Unaligned memory access" here:
Here's an idea
Posted Dec 9, 2007 23:27 UTC (Sun) by pr1268 (subscriber, #24648)
[Link]
Here's an idea: Let's all go back to 8-bit architectures and we won't have this problem anymore. ;-)
Okay, that was my one dorky sarcastic comment for the day.
Seriously, I'm curious about what happens without programmer intervention: Recently I had to code for a struct that looked like this (a similar example is given in the packed attribute link in the article):
struct my_object {
uint32_t a;
char c;
char filler[3];
uint32_t* p1;
char** p2;
char** p3;
};
I'm using a 32-bit computer, so I know all pointers occupy 4 bytes. Deal is, the char filler[3] array was not going to be used in any shape or form in my program, but I instinctively put it there to pad the whole structure to a multiple of 4 bytes. Would GCC have done that for me automatically if I had not included the char filler[3]? Or, would GCC have re-arranged things had I moved the char filler[3] to the bottom of the structure (leaving char c where it is)? How does the -Os optimization affect this? Thanks!
Padding structures elsewhere
Posted Dec 9, 2007 23:34 UTC (Sun) by pr1268 (subscriber, #24648)
[Link]
By the way, in a prior life I programmed mainframes in COBOL where we used fixed-length records. Thus explaining my choice of the identifier filler. Filler padding gets interesting in COBOL when working with packed-decimal numbers (not to mention the joys of a S0C7 exception), but I digress...
Posted Dec 10, 2007 7:25 UTC (Mon) by im14u2c (subscriber, #5246)
[Link]
Yes. On an architecture with alignment constraints, the "packed[3]" field isn't necessary. The compiler will insert padding. You can check this out with the offsetof() macro.
For example, if I compile the following program on my 64-bit Opteron, you can see the pointers all get aligned to 8 byte boundaries like they're supposed to. If I compile it on a 32 bit machine, they get aligned to 4 byte boundaries. This is regardless of whether that filler field is there.
#include <stdio.h>
#include <stddef.h>
typedef unsigned int uint32_t;
typedef struct obj_1
{
uint32_t a, b;
char c;
char filler[3];
uint32_t* p1;
char** p2;
char** p3;
} obj_1;
typedef struct obj_2
{
uint32_t a, b;
char c;
uint32_t* p1;
char** p2;
char** p3;
} obj_2;
int main()
{
printf("offset of obj_1.a: %5d bytes\n", offsetof(obj_1, a));
printf("offset of obj_1.b: %5d bytes\n", offsetof(obj_1, b));
printf("offset of obj_1.c: %5d bytes\n", offsetof(obj_1, c));
printf("offset of obj_1.filler: %5d bytes\n", offsetof(obj_1, filler));
printf("offset of obj_1.p1: %5d bytes\n", offsetof(obj_1, p1));
printf("offset of obj_1.p2: %5d bytes\n", offsetof(obj_1, p2));
printf("offset of obj_1.p3: %5d bytes\n", offsetof(obj_1, p3));
putchar('\n');
printf("offset of obj_2.a: %5d bytes\n", offsetof(obj_2, a));
printf("offset of obj_2.b: %5d bytes\n", offsetof(obj_2, b));
printf("offset of obj_2.c: %5d bytes\n", offsetof(obj_2, c));
printf("offset of obj_2.p1: %5d bytes\n", offsetof(obj_2, p1));
printf("offset of obj_2.p2: %5d bytes\n", offsetof(obj_2, p2));
printf("offset of obj_2.p3: %5d bytes\n", offsetof(obj_2, p3));
putchar('\n');
printf("sizeof(int) = %d bytes\n", sizeof(int));
printf("sizeof(long) = %d bytes\n", sizeof(long));
printf("sizeof(void*) = %d bytes\n", sizeof(void*));
return 0;
}
Output on a 32-bit machine:
offset of obj_1.a: 0 bytes
offset of obj_1.b: 4 bytes
offset of obj_1.c: 8 bytes
offset of obj_1.filler: 9 bytes
offset of obj_1.p1: 12 bytes
offset of obj_1.p2: 16 bytes
offset of obj_1.p3: 20 bytes
offset of obj_2.a: 0 bytes
offset of obj_2.b: 4 bytes
offset of obj_2.c: 8 bytes
offset of obj_2.p1: 12 bytes
offset of obj_2.p2: 16 bytes
offset of obj_2.p3: 20 bytes
sizeof(int) = 4 bytes
sizeof(long) = 4 bytes
sizeof(void*) = 4 bytes
Output on a 64-bit machine:
offset of obj_1.a: 0 bytes
offset of obj_1.b: 4 bytes
offset of obj_1.c: 8 bytes
offset of obj_1.filler: 9 bytes
offset of obj_1.p1: 16 bytes
offset of obj_1.p2: 24 bytes
offset of obj_1.p3: 32 bytes
offset of obj_2.a: 0 bytes
offset of obj_2.b: 4 bytes
offset of obj_2.c: 8 bytes
offset of obj_2.p1: 16 bytes
offset of obj_2.p2: 24 bytes
offset of obj_2.p3: 32 bytes
sizeof(int) = 4 bytes
sizeof(long) = 8 bytes
sizeof(void*) = 8 bytes
Posted Dec 10, 2007 7:40 UTC (Mon) by im14u2c (subscriber, #5246)
[Link]
Oh, and bit-fields (that is, fields of somewhat arbitrary bit widths) tend to be based around the size of "int" on a given machine. That is, they tend to be word oriented. The fields pack together to form words, and a field doesn't straddle two words.
Note that I say "tend to." Bit field layout and struct layout are actually ABI issues (ABI == Application Binary Interface). For example, here's the SVR4 i386 ABI. Take a look starting at page 27. In the case of the SVR4 ABI, it appears bitfields are actually packed in terms of their base type. I believe the latest C standard only wants you to use signed int, unsigned int and _Bool, though.
Bitfields in C++
Posted Dec 10, 2007 21:31 UTC (Mon) by pr1268 (subscriber, #24648)
[Link]
Bjarne Stroustrup discusses a C++ Standard Template Library (STL) vector of bools (The C++ Programming Language, Special Edition, p. 458) - this is designed to overcome the wasted space of a 1-bit data structure taking up 16, 32, or 64 bits of memory.
Posted Dec 11, 2007 14:24 UTC (Tue) by im14u2c (subscriber, #5246)
[Link]
Keep in mind that a bitvector (sometimes called a bitmap) is something rather different than a bitfield member of a struct or class. Bitvectors have array-like semantics and are quite typically used to represent things such as set membership. (eg. a 1 indicates membership, a 0 indicates lack of membership.) Bitfield members, on the other hand, are scalar quantities. There is no indexing associated with them, and their range is often much more than 0 to 1.
Bitfield members are often useful for storing values with limited range in a compact manner. For example, consider dates and times. The month number only goes from 1-12 and the day number only goes from 1-31. You can store those in 4 and 5 bits respectively. If you store year-1900 instead of the full 4 digit number, you can get by with 7 or 8 bits there. Hours fit in 5 bits, minutes and seconds in 6 bits each. That leads to something like this:
struct packed_time_and_date
{
unsigned int year : 7; /* 7 bits for year - 1900 */
unsigned int month : 4; /* 4 bits for month (1 - 12) */
unsigned int day : 5; /* 5 bits for day (1 - 31) */
unsigned int hour : 5; /* 5 bits for hour (0 - 23) */
unsigned int minute : 6; /* 6 bits for minute (0 - 59) */
unsigned int second : 6; /* 6 bits for second (0 - 59) */
};
Now, if I did my math right, that adds up to 33 bits. Under the x86 UNIX ABI, the compiler will allocate 2 32-bit words for this. The first 5 fields will pack into the first 32-bit word, and the 6th field will be in the lower 6 bits of the second word. That is, sizeof(struct packed_time_and_date) will be 8. If you can manage to squeeze a bit out of the year (say, limit yourself to a smaller range of years), this fits into 32 bits, and sizeof(struct packed_time_and_date) will be 4. In either case, the compiler will update these fields-packed-within-words with an appropriate sequence of loads, shifts, masks, and stores. No C++ or STL necessary.
(If you want to see a more in-depth (ab)use of this facility, check out this source file. It constructs a union of structs, each struct modeling a different opcode space in an instruction set.)
Anyhow, this is what I was referring to as "bit-fields" in the comment you replied to. As you can see, it's rather different than bitvectors. :-)
Bitvectors in C++ and STL have their own problems. I've been reading up on C++ and STL, and vectors of bits apparently have a checkered history in STL. There was one implementation (I believe at SGI) of bitvector (separate of vector<bool>) that didn't make it into the standard, and yet another (apparently a specialization of vector<> especially for bool) that many feel doesn't actually provide proper iterators. I wish I could provide references or details.
All I know is that it's enough for me to avoid STL on bitvectors, and just resort to coding these manually. It's served me well so far. And I don't see a need to invoke generics like sort on them. Something tells me most generic algorithms aren't as interesting on bitvectors. ;-)
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/260832/ | crawl-002 | refinedweb | 2,643 | 62.48 |
I'm confused on how to go about styling the 2 'confirm' and 'cancel' buttons within Angular's primeng library when using the ConfirmDialog.
ref:
I'd like to make the 'confirm' button remain green, and change style the 'cancel' button red.
Changing the styling within the css for:
.ui-widget-header .ui-button, .ui-widget-content .ui-button, .ui-button
changes colors on both buttons. Is there a way around this?
You could use the CSS Adjacent Sibling Selector to target the buttons, this assumes there will only be two buttons, the confirm and cancel:
.ui-dialog-footer .ui-button { background: /* some green color */ } .ui-dialog-footer .ui-button + .ui-button { background: /* some red color */ }
The buttons seem to be in a container div with CSS class
.ui-dialog-footer when trying the demo in the link you provided. However if your implementation has the buttons in a different container/namespace, you can replace
.ui-dialog-footer with whatever you'd need to prevent the styles from affecting ALL buttons in your application.
Here is a jsfiddle demonstrating the functionality in action.
Hopefully that helps! | https://codedump.io/share/fjBJOinHbbOf/1/how-do-i-style-the-confirmation-buttons-within-primeng39s-confirmdialog-modal | CC-MAIN-2017-39 | refinedweb | 186 | 50.33 |
Paul Prescod wrote: > ... > > > > > > Well let's put it this way: XML 1.0 uses PIs. XML 1.0 *defines* PIs. That is very different. > So does the stylesheet > binding extension (for CSS and XSL). This is what I was looking for: the *use* of a PI. Per my other email (treatise? :-), I think that I've discovered we are operating within two classes of applications: * data-oriented use of XML * layout-oriented use of XML For the former, I have not seen a case where a PI is necessary. For the latter: yes, you need a PI for stylesheets. Too bad... you get to use the DOM :-) > I don't doubt that namespaces are important but they can easily be viewed > as an extension of (or layer on top of) the minimal API. Nope. Namespaces are critical, as Fredrik has pointed out. My endeavors to use namespaces within the DOM style of programming has also led me to believe that it isn't a simple extension or layer on top of a minimal API. Why? Well... if you attempt to post-process the namespace information, then where do you store it? The client that is doing the post-processing only receives *proxy* objects. It cannot drop the information there since those objects are *not* persistent. Instead, the client has to reach into the internals of the DOM to set (and get!) the namespace info. Bleck! > There are four objects there. If we want it to be a tree we need a wrapper > object that contains them. You could argue that in the lightweight API the > version and doctype information could disappear but surely we want to > allow people to figure out what stylesheets are attached to their > documents! I maintain that the stylesheets are not applicable to certain classes of XML processing. So yes, they get punted too. A simple API of elements and text is more than suitable. > > NodeType is bogus. It should be absolutely obvious from the context what a > > Node is. If you have so many objects in your system that you need NodeType > > to distinguish them, then you are certainly not a light-weight solution. > > XML is a dynamically typed language, like XML. If I have a mix of > elements, characters and processing instructions then I need some way of > differentiating them. I don't feel like it is the place of an API to > decide that XML is a strongly typed language and silently throw away > important information from the document. Hello? It *is* the place of the API to define semantics. That is what APIs do. I can understand if you don't like this particular semantic, but I feel your argument is deeply flawed. > > > Document.DocumentElement (an element node property) > > > > If Document has no other properties, then it is totally bogus. Just return > > the root Element. Why the hell return an object with a single property > > that refers to another object? Just return that object! > > Document should also have ChildNodes. Your spec didn't show it. Okay... so it has ChildNodes. How do you get the root element? Oops. You have to scan for the thing. Painful! > > If you want light-weight, then GetAttribute is bogus given that the same > > concept is easily handled via the .Attributes value. Why introduce a > > method to simply do Element.Attributes.get(foo) ?? > > GetAttribute is simpler, more direct and maybe more efficient in some > cases. It works with simple strings and not attribute objects. It will *never* be more efficient. Accessing a Python attribute and doing a map-fetch will always be faster than a method call. Plain and simple. 
(caveat: as I mentioned in prior posts, qp_xml should be using a mapping rather than a list of objects... dunno what I was thinking) > > > Element.TagName > > > Element.PreviousSibling > > > Element.NextSibing > > > > These Sibling things mean one of two things: > > > > 1) you have introduced loops in your data structure > > 2) you have introduced the requirement for the proxy crap that the current > > DOM is dealing with (the Node vs _nodeData thing). > > > > (1) is mildly unacceptable in a light-weight solution (you don't want > > people to do a quick parse of data, and then require them to follow it up > > with .close()). > > I don't see this as a big deal. > > This is an efficiency versus simplicity issue. These functions are > extremely convenient in a lot of situations. The origin of qp_xml was for efficiency first, simplicity second. I maintain that qp_xml provides both. I will agree to disagree that parents and siblings are useful. (IMO, they are not, and only serve to complicate the system). > > Case in point: I wrote a first draft davlib.py against the DOM. Damn it > > was a serious bitch to simply extract the CDATA contents of an element! > > XML is a dynamically typed language. "I've implemented Java and now I'm > trying to implement Python and I notice that you guys through these > PyObject things around and they make my life harder. I'm going to dump > them from my implementation." Again, back to this "dynamically typed language". That is your point of view, rather than a statement of fact. I won't attempt to characterize how you derived that point of view (from the DOM maybe?), but it is NOT the view that I hold. XML is a means of representing structured data. That structure takes the form of elements (with attributes) and contained text. I do not see how XML is a programming langauge, or that it is dynamically typed. It is simply a representation in my mind. And I'll ignore the quote which just seems to be silliness or flamebait... > > Moreover, it was also a total bitch to simply say "give me the child > > elements". Of course, that didn't work since the DOM insisted on returning > > a list of a mix of CDATA and elements. > > It told you what was in your document. I also get that from qp_xml with a lot less hassle, so that says to me that the DOM is introducing needless complexity/hassle for the client. > If you want to include helper functions to do this stuff then I say fine: > but if you want to throw away the real structure of the document then I > don't think that that is appropriate. Helper functions are simply a mechanism to patch the inherent complexity introduced by the DOM. It does not need to be so complicated. Python has excellent mechanisms to hold structured data; qp_xml uses them to provide excellent benefit (relative to the DOM). The only "structure" that I toss are PIs and comments. I do not view those as "structure". The contents (elements, attributes, text) are retained and can be reconstructed from the structure that qp_xml returns. > > IMO, the XML DOM model is a neat theoretical expression of OO modelling of > > an XML document. For all practical purposes, it is nearly useless. (again: > > IMO) ... I mean hey: does anybody actually use the DOM to *generate* XML? > > Screw that -- I use "print". I can't imagine generating XML using the DOM. > > Complicated and processing intensive. > > I'm not sure what your point is here. I wouldn't use the DOM *or* qp_xml > to generate XML in most cases. 
As you point out "print" or "file.write" is > sufficient in most applications. This has nothing to do with the DOM and > everything to do with the fact that writing to a file is inherently a > streaming operation so a tree usually gets in the way. Most of the DOM's interface is for *building* a DOM structure. It is conceivable that those APIs only exist as a way to response to parsing events, but I believe their existence is due to the fact that people want to build a DOM and then generate the resulting XML. Otherwise, we could have had two levels of the DOM interface: read-only (with private construction mechanisms), and read-write (as exemplified by the current DOM). I believe that the notion of build/generate via the DOM is bogus. It seems you agree :-), and that print or file.write is more appropriate. Fredrik has some utility objects to do it. All fine. The DOM just blows :-) > > Sorry to go off here, but the DOM really bugs me. I think it is actually a > > net-negative for the XML community to deal with the beast. I would love to > > be educated on the positive benefits for expressing an XML document thru > > the DOM model. > > I think that the DOM is broken for a completely different set of reasons > than you do. But the DOM is also hugely popular and more widely > implemented than many comparable APIs in other domains. I'm told that I could care less about compatibility. I'm trying to write an application here. Geez... using your viewpoint: if I wanted compatibility, then maybe I should use Java or C since everybody else uses that. > Microsoft's DOM impelementation is referenced in dozens of their products > and throughout many upcoming technologies. Despite its flaws, the DOM is > an unqualified success and some people like it more than XML itself. They > are building DOM interfaces to non-XML data! Goody for them. That doesn't help me write my application. > > Use a mapping. Toss the intermediate object. If you just have name and > > value, then you don't need separate objects. Present the attributes as a > > mapping. > > In this case I am hamstrung by DOM compatibility. This is a small price to > pay as long as we keep the simpler GetAttribute methods. The only reason > to get the attribute objects is when you want to iterate over all > attributes which is probably relatively rare. This is why I say "toss the DOM". Help your client programmers, rather than be subserviant to the masses distorted view of XML programming :-) Cheers, -g -- Greg Stein, | https://mail.python.org/pipermail/xml-sig/1999-April/001124.html | CC-MAIN-2019-51 | refinedweb | 1,646 | 75.5 |
!! BE AWARE !! this is just the front end for the process. I may dabble in the code to fetch a list of emails or send an email in a future tutorial. Thankfully the way this is broken up allows for easy insertion of that code in the future.
Software:
-- Visual Studios 2017
Concepts:
-- C#
-- Core 2.1 / Razor pages
-- Entity Framework
-- Test Driven Development
-- Asynchronous
-- GULP
-- NPM
github link:
A good project needs a good plan. Thinking about data objects I can see an 'email' object as well as a 'folder' object. The 'email' would need an id, to, from, subject, body, and date. If I use this object in a collection I will need to visually show the user which is 'selected' so let's add that to the list.
The 'folder' would be simple.. an id, name, and selected boolean.
Per my usual work flow I will have a single 'data access' class everything taps into. That sort of decoupling helps in a few ways, but the biggest is I can have my aysnc functions return test data and the rest of the app doesn't care. For all the ModelView is concerned it could be from a db, a file, or even web calls to a pop email server. This greatly helps in testing and narrowing focus for future updates.
I am seeing a call to get a list of folders, a call to get a list of emails for said folder, and an area to write emails/see the details of a specific email.
================================
With a general direction let's go about creating the project and get the required scaffolding in place.
I am using my general setup from here:
With my index.cshtml plunked into my 'Pages' folder I start with my data classes.
In the created 'Data' folder (from the aforementioned 'general setup' tutorial link) I create a class called 'Email'.
Using the 'Data Annotations' namespace I make sure to give the 'visible' options friendly names. This means a Razor label can pick up on a name change here without having to hunt it all over the code later. A nifty single point of entry.
using System; using System.ComponentModel.DataAnnotations; namespace Razor_MailClient.Data { /// <summary> /// Defining the base components of an 'email'. /// Could be expanded to include header, addresses, known names, etc. /// </summary> public class Email { public int ID { get; set; } [Display(Name = "To")] public string TO { get; set; } [Display(Name = "From")] public string FROM { get; set; } [Display(Name = "Subject")] public string SUBJECT { get; set; } [Display(Name = "Body")] public string BODY { get; set; } [Display(Name = "Date")] public DateTime DATE_RECEIVED { get; set; } public bool Selected { get; set; } } }
Per the plan I create a 'Folder' class in the same area. Since the folder names will be their display no need for data annotations here.
namespace Razor_MailClient.Data { /// <summary> /// Defining the base components of a 'folder'. /// </summary> public class Folder { public int ID { get; set; } public string NAME { get; set; } public bool SELECTED { get; set; } } }
In the index file (under the 'pages' folder). I like to have a tempdata string called 'Message' to keep me afloat with random debugging messages during development.
Among the items needed will be a list of emails , a list of folders, a way to show which email is being read, which id is selected, and which folder is active.
[TempData] public string Message { get; set; }// no private set b/c we need data back public List<Email> _emails { get; set; } public List<Folder> _folders { get; set; } [BindProperty] // makes the round trip. public Email _readingEmail { get; set; } public int SELECTED_EMAIL_ID { get; set; } public Folder _active_folder { get; set; }
TempData - exists until read.
Bindproperty - data the survives the round trip from the server to the client to the client POSTing it.
It's about here I am thinking what it means to have a 'single page'. Actions on the page would include clicking on an email (and wanting it pulled up) and clicking on a folder and having that be active. I figure my page's "on get" would need to be able to take in an email id or folder id.
I also want the data shuffling to be all async so I set that as well.
public async Task OnGetAsync(int id_email, int id_folder)
From here there is no real ordered steps in my development except I hit each milestone and went from UI action to OnPost to DataAccess and adjusted things accordingly.
Example:
When a user clicks 'read this specific email' the on post takes in the email id and current selected folder id (so that stays selected on refresh).
public async Task<IActionResult> OnPostReadEmailAsync(int ID, int folder_id) { DataAccess data = new DataAccess(); Message = "Email Read"; Email temp = await ReadEmail(ID); return RedirectToPage("/Index", new { id_email = temp.ID, id_folder = folder_id }); }
The private function takes an id and asks for an email object.
private async Task<Email> ReadEmail(int ID) { DataAccess data = new DataAccess(); return await data.ReadEmailAsync(ID); }
Go to the DataAccess I have a method stubbed out that returns a filled 'email' data object. This could easily be converted to a sql call or
public async Task<Email> ReadEmailAsync(int iD) { Email temp = null; // test data being read. await Task.Run(() => { temp = new Email() { ID = iD, SUBJECT = $"Test Subject - {iD}", FROM = "[email protected]", TO = "[email protected]", BODY = "test body", DATE_RECEIVED = Convert.ToDateTime("01/05/2018") }; }); return temp; }
Things are broken apart into functionality and ultimately that data class is decoupled enough I can change the source and nothing else! Mmmmm... decouple cake! Go ahead - smear it all over your face and make a soft moan. That's the quality goods right there.
Let's see an example of loading the folders. Folder lists really only need to happen when the page loads (first time or on a get from a post). So the 'onGetAsync' has a call for:
await LoadFoldersAsync(id_folder);
If there is a given folder id (was previously 'selected' on a post or nothing yet so it's 0) call the function.
This asks for a list of folder objects from the data access. If there is at at least one active folder set that so the html knows to make it fancy.
private async Task LoadFoldersAsync(int selected_folder = 0) { DataAccess data = new DataAccess(); _folders = await data.LoadFolderListAsync(selected_folder); if (_folders.Count > 0) { _active_folder = (from x in _folders where x.SELECTED == true select x).ToList<Folder>()[0]; } }
In the data access I have another stubbed out function that returns a collection of folder objects. This could have been picked up from a DB, a pop call, or what ever.. but for testing I fill it out and the rest of the app doesn't know the difference.
public async Task<List<Folder>> LoadFolderListAsync(int selected_folder) { List<Folder> temp = new List<Folder>(); // test data. Could be expanded to include sub folders, junk mail, etc. // conceptually 0 is 'inbox' await Task.Run(() => { temp.Add(new Folder() { ID = 0, NAME = "Inbox" }); temp.Add(new Folder() { ID = 1, NAME = "Sent" }); if(selected_folder > -1) { foreach (Folder i in temp) { if (i.ID == selected_folder) { i.SELECTED = true; break; } } } }); return temp; }
The HTML side isn't quite as interesting but there is a nifty use of collections for options for decoration.
Take this line that assigns a css class to the LabelFor. Straight forward mix up of a basic HTML class selector.
@Html.LabelFor(x => @Model._readingEmail.FROM, new { @class = "badge badge-default" })
A little further below I wanted to conditionally have a 'disabled' tag to the control, but also still have the class style be assigned. That third parameter is just a collection and you can, in place, declare a dictionary and fill as needed! Flipping magic, ya'all!
@Html.TextBoxFor(x => @Model._readingEmail.FROM, new Dictionary<string, object>() {{(Model._readingEmail.ID != -1) ? "disabled": "data-notdisabled" , "disabled" }, { "class","form-control" }})
Higher up I declare a C# string and fill it with either null or a font weight and assign that variable to a table row's style. A little bit of mixup but pretty cool for conditional styling.
string _style = Model._emails[i].ID == Model.SELECTED_EMAIL_ID ? "font-weight:bolder;" : null; <tr style="@_style">
Crazy, right?!
For the most part that's all the interesting bits.
Clearly the ID column and fields wouldn't be shown, but for this example - not a big deal.
A quick refresher - remember the Data Annotations mentioned above? Here's an example of how they are handy.
The label text is pulled from that data annotation (or just the property name if no annotation was provided), and the textbox holds the data.
Don't forget to review what the page's "model" is from above!
<div class="row"> <div class="col-xs-6"> @Html.LabelFor(x => @Model._readingEmail.FROM, new { @ @Html.TextBoxFor(x => @Model._readingEmail.FROM, new Dictionary<string, object>() {{(Model._readingEmail.ID != -1) ? "disabled": "data-notdisabled" , "disabled" }, { "class","form-control" }}) </div> </div>
Extra reading: | https://www.dreamincode.net/forums/topic/415843-razor-pages-core-21-simple-email-client/ | CC-MAIN-2020-05 | refinedweb | 1,484 | 64.3 |
From: Terje Slettebø (tslettebo_at_[hidden])
Date: 2003-06-05 17:17:07
>From: "Pavel Vozenilek" <pavel_vozenilek_at_[hidden]>
> "Terje Slettebø" <tslettebo_at_[hidden]> wrote in message
> news:3df901c32b88$77439410$8d6c6f50_at_pc...
> [snip]
>
> > int main()
> > {
> > function_ptr<int (A*, int), &A::a_member> fn;
> >
> > // The rest the same
> >
> > A a;
> > int r=fn(&a, 3); // sets r to 9
> > }
> >
> Is it similar (in principle) to
> (long text)?
Not quite. The attached code doesn't implement any closure. All it does it
to provide a convenient way of defining a functions, which then calls the
provided member function.
The following:
function_ptr<int (A*, int), &A::a_member> fn;
gets essentially transformed to:
int unique_name(A* c, int a1)
{
return (c->*&A::a_member)(a1);
}
"fn" is an object which has an implicit conversion to pointer to function,
giving a pointer to "unique_name". "unique_name" is guaranteed to be unique
for each member function pointer, as it uses the member function pointer as
part of its template-id.
It should theoretically be possible to bind arguments this way, as well, so
you implement a kind of closure, but that isn't currently implemented.
You might for example do:
A a;
function_ptr<int (A*, int), &A::a_member> fn(&a, 1);
fn(); // Call (&a->*&A::a_member)(1)
However, in this case, with the current implementation, the bound arguments
would be per-class, not per-object, since they would be stored in the
"unique_name" function.
Regards,
Terje
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2003/06/48703.php | CC-MAIN-2019-22 | refinedweb | 255 | 53 |
The name deque is short for double-ended queue, and is pronounced like deck. Traditionally, the term is used to describe any data structure that permits both insertions and removals from either the front or the back of a collection. The deque container class permits this and much more. In fact, the capabilities of the deque data structure are almost a union of those provided by the vector and list classes.
Like a vector, the deque is an indexed collection. Values can be accessed by subscript, using the position within the collection as a key. This capability is not provided by the list class.
Like a list, however, values can be efficiently added either to the front or to the back of a deque. This capability is provided only in part by the vector class.
As with both the list and vector classes, insertions can be made into the middle of the sequence held by a deque. Such insertion operations are not as efficient as with a list, but slightly more efficient that they are in a vector.
In short, a deque can often be used in situations that require a vector and in situations that require a list. Often, using a deque in place of either a vector or a list results in faster programs. To determine which data structure should be used, you can refer to the set of questions described in Section 4.2
The deque header file must appear in all programs that use the deque datatype:
#include <deque> | http://stdcxx.apache.org/doc/stdlibug/7-1.html | CC-MAIN-2016-07 | refinedweb | 253 | 71.44 |
Using .NET Framework, it is very easy to check for valid and broken links in C#. The main namespace required is System.Net
7/5/11 Update: use HttpWebResponse instead of WebResponse
In the article describing how to download a file in C# we described how to connect to the internet to retrieve a file. Using the same technique we can connect to a URL and download a webpage (which is also a file) to test whether it is available or not. In fact, HTTP status codes let us know a lot more than that.
Setting up the web connection is simple:
Uri urlCheck = new Uri(url); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(urlCheck); request.Timeout = 15000;
The HttpWebRequest class automatically handles everything required to request data from the given URL. You can also manually specify for how long a request should attempt to connect to the link. In this case we set a timeout of 15 seconds (set in milliseconds).
HttpWebResponse response; try { response = (HttpWebResponse)request.GetResponse(); } catch (Exception) { return false; //could not connect to the internet (maybe) }
The GetResponse command actually goes and tries to access the website. An exception is thrown if the class can't connect to the link (usually because the computer is not connected to the internet). This is a good place to make the distinction between not being able to connect and the webpage not existing. However there are a few wrinkles. For example, a 403 status code (forbidden access) will throw an exception instead of simply setting a response code.
If otherwise the connection when through okay, the HttpWebResponse class will give us access to a status code of the response. This status code tells us the state of the URL. Note that we had to explicitly cast WebResponse to HttpWebResponse to gain access to the status code.
There are many status codes and each have their own meaning. The most common one is 200, which means the URL was found. 404 means the page was not found, 302 means the page is redirected somewhere else, etc. You can check out the complete status code definitions. Luckily for us, the HttpStatusCode enum encapsulates the most common status codes and their meaning.
So for our example, we might just want to check if the status code is 200 (the page was found) and return false otherwise.
return response.StatusCode == HttpStatusCode.Found;
Go ahead and download the C# source code. The CheckURL function takes in a webpage address as a parameter and returns a simple boolean value indicating whether the link is valid or broken. | http://www.vcskicks.com/check-website.php | CC-MAIN-2015-18 | refinedweb | 430 | 63.7 |
Welcome to an object detection tutorial with OpenCV and Python. In this tutorial, you will be shown how to create your very own Haar Cascades, so you can track any object you want. Due to the nature and complexity of this task, this tutorial will be a bit longer than usual, but the reward is massive.
While you *can* do this in Windows, I would not suggest it. Thus, for this tutorial, I am going to be using a Linux VPS, and I recommend you do the same. You can try to use the free tier from Amazon Web Services, though it may be painfully slow for you, and you will likely need more RAM. You can also get a VPS from Digital Ocean for as low as $5/month. I would recommend at least 2GB of RAM for what we will be doing. Most hosts nowadays charge by the hour, including Digital Ocean. Thus, you can buy a $20/mo server, use it for a day, take the files you want, and then terminate the server and pay a very small amount of money. Do you need more help setting up the server? If so, check out this specific tutorial.
Once you have your server ready to go, you will want to get the actual OpenCV library.
Change directory to the server's root, or wherever you want to place your workspace:
cd ~
sudo apt-get update
sudo apt-get upgrade
First, let's make ourselves a nice workspace directory:
mkdir opencv_workspace
cd opencv_workspace
Now that we're in here, let's grab OpenCV:
sudo apt-get install git
git clone https://github.com/opencv/opencv.git
We've cloned the latest version of OpenCV here. Now let's get some essentials:
Compiler:
sudo apt-get install build-essential
Libraries:
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
Python bindings and such:
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
Finally, let's grab the OpenCV development library:
sudo apt-get install libopencv-dev
Now, how do we actually go about this process? So when you want to build a Haar Cascade, you need "positive" images, and "negative" images. The "positive" images are images that contain the object you want to find. These can either be images that mainly just contain the object, or images that contain the object somewhere, where you specify the ROI (region of interest) marking the object's location. With these positives, we build a vector file that is basically all of these positives put together. One nice thing about the positives is that you can actually just have one image of the object you wish to detect, and then have a few thousand negative images. Yes, a few thousand. The negative images can be anything, except they cannot contain your object.
From here, with your single positive image, you can use the
opencv_createsamples command to actually create a bunch of positive examples, using your negative images. Your positive image will be superimposed on these negatives, and it will be angled and all sorts of things. It actually can work pretty well, especially if you are really just looking for one specific object. If you are looking to identify all screwdrivers, however, you will want to have thousands of unique images of screwdrivers, rather than using the
opencv_createsamples to generate samples for you. We'll keep it simple and just use one positive image, and then create a bunch of samples with our negatives.
Our positive image:
Here's another scenario where you will likely enjoy this better if you use your own image. If things go wrong, try with mine and see where maybe you went wrong, but I suggest you take your own picture. Keep it small. 50x50 pixels is pushing it.
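If you do take your own photo, OpenCV can do the shrinking for you. This is just a sketch — 'my_watch_photo.jpg' stands in for whatever your picture is actually called:

import cv2

# load your photo as grayscale and shrink it down to 50x50
img = cv2.imread('my_watch_photo.jpg', cv2.IMREAD_GRAYSCALE)
resized_image = cv2.resize(img, (50, 50))
cv2.imwrite('watch5050.jpg', resized_image)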
Ok great, getting a positive image is no problem! There is just one problem. We need thousands of negative images. Possibly in the future, we may want thousands of positive images too. Where in the world can we get those? There's quite a useful site, based on the concept of WordNet, called ImageNet. From here, you can find images of just about anything. In our case, we want watches, so search for watches, and you will find tons of categories of watches. Let's go with analog watches. Awesome! It gets better though. Check out that downloads tab! There's a URL for all of the analog watches! Very cool. Okay, but I said we will just use the one positive, so we just detect the one watch. If you want to detect "all" watches, prepare to get more like 50,000 images of watches, and at least 25,000 "negative" images. After that, prepare to have quite the server, unless you want your Haar Cascade training to take a week. So how might we get negatives? The whole point of ImageNet is for image training, so their images are pretty specific. Thus, if we search for people, cars, boats, planes...whatever, chances are, there will not be watches. You might see some watches on people or something like that, but you get the idea. Since you will likely find watches around or on people, I actually think you might as well get images of people. My idea was to find people doing sports, since they probably are not wearing analog watches. So, let's find some bulk image URL links. I found the sports/athletics link to have a reported 1,888 images, but you will find a lot of these are totally broken. Let's find one more: People.
Alright great, we have all these images, now what? Well, first, we actually want all of these to be the same size, and a whole lot smaller! Gosh if only we knew of a way to manipulate images... hmm... Oh right this is an OpenCV tutorial! We can probably handle it. So, first, what we're going to do here is write a quick script that will visit these URL lists, grab the links, visit the links, pull the images, resize them, save them, and repeat until we're done. When our directories are full of images, we also need a sort of description file that describes the images. For positives, this file is a massive pain to create manually, since you need to specify the exact Region of Interest for your object, per image. Gross. Luckily the create_samples method places the image randomly and does all that work for us. We just need a simple descriptor for the negatives, but that's no problem, we can do that while we pull and manipulate the images.
Feel free to run this code wherever you like. I am going to run it on my main computer, since it should go a bit faster. You can run it on your server. If you want the cv2 module, do a sudo apt-get install python-opencv. At the moment, I do not know of a good way to get these bindings for Python 3 on Linux. The script I will be writing is for Python 3, so keep this in mind. The main difference will be the urllib handling.
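If you want to confirm the bindings load at all before moving on, a two-line check is enough (run it with whichever Python you installed the bindings for):

import cv2
print(cv2.__version__)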
sudo apt-get install python-OpenCV. At the moment, I do not know of a good way to get these bindings for Python 3 on Linux. The script I will be writing is for Python 3, so keep this in mind. The main difference will be the Urllib handling.
download-image-by-link.py
import urllib.request
import cv2
import numpy as np
import os

def store_raw_images():
    neg_images_link = 'http://image-net.org/api/text/imagenet.synset.geturls?wnid=n00523513'
    neg_image_urls = urllib.request.urlopen(neg_images_link).read().decode()
    pic_num = 1

    if not os.path.exists('neg'):
        os.makedirs('neg')

    for i in neg_image_urls.split('\n'):
        try:
            print(i)
            urllib.request.urlretrieve(i, "neg/"+str(pic_num)+".jpg")
            img = cv2.imread("neg/"+str(pic_num)+".jpg", cv2.IMREAD_GRAYSCALE)
            # should be larger than the positive image, so we can superimpose it later
            resized_image = cv2.resize(img, (100, 100))
            cv2.imwrite("neg/"+str(pic_num)+".jpg", resized_image)
            pic_num += 1

        except Exception as e:
            print(str(e))
Simple enough, this script will visit the links, grab the URLs, and proceed to visit them. From here, we grab the image, convert to grayscale, resize it, then save it. We use a simple counter for naming the images. Go ahead and run it. As you can probably see, there are a lot of missing pictures and such. That's okay. More problematic are some of these error pictures. They're basically all white, with some text that says the image is no longer available, rather than serving an HTTP error. Now, we have a couple choices. We can just ignore this, or fix it. Hey, it's an image without a watch, so whatever, right? Sure, you could take that opinion, but if you use this pulling method for positives, then this is definitely a problem. You could manually delete them... or we can just use our new Image Analysis knowledge to detect these silly images and remove them!
I went ahead and made a new directory, calling it "uglies." Within that directory, I have click and dragged all ugly image versions (just one of each). There's only one major offender that I found with the negatives, so I just have one. Let's write a script to find all instances of this image and delete it.
def find_uglies():
    match = False
    for file_type in ['neg']:
        for img in os.listdir(file_type):
            for ugly in os.listdir('uglies'):
                try:
                    current_image_path = str(file_type)+'/'+str(img)
                    ugly = cv2.imread('uglies/'+str(ugly))
                    question = cv2.imread(current_image_path)
                    if ugly.shape == question.shape and not(np.bitwise_xor(ugly,question).any()):
                        print('That is one ugly pic! Deleting!')
                        print(current_image_path)
                        os.remove(current_image_path)
                except Exception as e:
                    print(str(e))
We just have the negatives for now, but I left room for you to add in 'pos' easily there. You can run it to test, but I wouldn't mind grabbing a few more negatives first. Let's run the image puller one more time, only with the URL: http://image-net.org/api/text/imagenet.synset.geturls?wnid=n07942152. The last image was #952, so let's start pic_num at 953, and change the URL.
def store_raw_images():
    neg_images_link = 'http://image-net.org/api/text/imagenet.synset.geturls?wnid=n07942152'
    neg_image_urls = urllib.request.urlopen(neg_images_link).read().decode()
    pic_num = 953

    if not os.path.exists('neg'):
        os.makedirs('neg')

    for i in neg_image_urls.split('\n'):
        try:
            print(i)
            urllib.request.urlretrieve(i, "neg/"+str(pic_num)+".jpg")
            img = cv2.imread("neg/"+str(pic_num)+".jpg", cv2.IMREAD_GRAYSCALE)
            resized_image = cv2.resize(img, (100, 100))
            cv2.imwrite("neg/"+str(pic_num)+".jpg", resized_image)
            pic_num += 1

        except Exception as e:
            print(str(e))
Now we have over 2,000 pictures, so we're cookin. The last step is we need to create the descriptor file for these negative images. Again, we'll use some code!
def create_pos_n_neg():
    for file_type in ['neg']:
        for img in os.listdir(file_type):

            if file_type == 'pos':
                line = file_type+'/'+img+' 1 0 0 50 50\n'
                with open('info.dat','a') as f:
                    f.write(line)
            elif file_type == 'neg':
                line = file_type+'/'+img+'\n'
                with open('bg.txt','a') as f:
                    f.write(line)
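Note that everything so far is just function definitions — nothing actually executes until you call them. If you've kept it all in one script, a minimal driver at the bottom might look like this (comment out the stages you have already completed):

store_raw_images()
find_uglies()
create_pos_n_neg()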
Run that, and you have a bg.txt file. Now, I understand some of you may not have the best internet connection, so I will be a good guy greg and upload the negative images and the description file here. You should run through these steps though. If you're bothering at all with this tutorial, you need to know how to do that part. Alright, so we decided we're going to just use the one image for the postive foreground image. Thus, we need to do create_samples. This means, we need to move our neg directory and the bg.txt file to our server. If you ran all this code on your server, don't worry about it.
If you're a wizard and have figured out how to run create_samples and such on Windows, congratulations! Back in server-land, my files are now like:
opencv_workspace
--neg
----negimages.jpg
--opencv
--info
--bg.txt
--watch5050.jpg
You probably don't have the info directory, so go ahead and
mkdir info. This is where we will stuff all of the positive images.
We're ready to create some positive samples now, based on the watch5050.jpg image. To do this, run the following via the terminal, while in the workspace:
opencv_createsamples -img watch5050.jpg -bg bg.txt -info info/info.lst -pngoutput info -maxxangle 0.5 -maxyangle 0.5 -maxzangle 0.5 -num 1950
What this does is creates samples, based on the img we specifiy, bg is the background information, info where we will put the info.list output (which is a lot like the bg.txt file), then the -pngoutput is wherever we want to place the newly generated images. Finally, we have some optional parameters to make our original image a bit more dynamic and then =num for the number of samples we want to try to create. Great, let's run that. Now you should have ~2,000 images in your info directory, and a file called info.lst. This file is your "positives" file basically. Open that up and peak at how it looks:
0001_0014_0045_0028_0028.jpg 1 14 45 28 28
First you have the file name, then you have how many of your objects is in the image, followed by all of their locations. We just have one, so it is the x, y, width, and height of the rectangle for the object within the image. Here's one of the images:
Kind of hard to see it, but the watch is in this image if you look hard. Lower and to the left of the left-most person in the image. Thus, this is a "positive" image, created from an otherwise "negative" image, and that negative image will also be used in training. Now that we have positive images, we now need to create the vector file, which is basically where we stitch all of our positive images together. We will actually be using
opencv_createsamples again for this!
opencv_createsamples -info info/info.lst -num 1950 -w 20 -h 20 -vec positives.vec
That's our vector file. Here, we just let it know where the info file is, how many images we want to contain in the file, what dimensions should the images be in this vector file, and then finally where to output the results. You can make these larger if you like, 20 x 20 is probably good enough, and the larger you go, the exponentially longer it will take to run the trainer. Continuing along, we now just need to train our cascade.
First, we want to place the output somewhere, so let's create a new data directory:
mkdir data and your workspace should look like:
opencv_workspace
--neg
----negimages.jpg
--opencv
--info
--data
--positives.vec --bg.txt
--watch5050.jpg
Now let's run the train command:
opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1800 -numNeg 900 -numStages 10 -w 20 -h 20
Here, we say where we want the data to go, where the vector file is, where the background file is, how many positive images and negative images to use, how many stages, and the width and height. Note that we use significantly less numPos than we have. This is to make room for the stages, which will add to this.
There are more options, but these will do. The main ones here are the numbers of positive and negatives. General concensus is, for most practices, you want to have 2:1 ratio of pos:neg images. Some situations may differ, but this is a general rule people seem to follow. When in Rome. Next, we have stages. We chose 10. You want 10-20 at least here, the more, the longer it will take, and it is again exponential. The first stage is pretty fast usually, stage 5 much slower, and stage 50 is forever! So, we do 10 stages for now. The neat thing here is you can train 10 stages, come back later, change the number to 20, and pick up right where you left off. Similarly, you can just put in something like 100 stages, go to bed, wake up in the morning, stop it, see how far you got, then "train" with that many stages and you will be instantly presented with a cascade file. As you can probably gather from that last sentence, the result of this command is indeed the great, and holy, cascade file. Ours will hopefully detect my watch, or whatever object you decided to go with. All I know is that I am not even through with stage 1 yet from typing this entire paragraph. If you really do want to run the command overnight, but don't want to leave the terminal open, you can make use of
nohup:
nohup opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 1800 -numNeg 900 -numStages 10 -w 20 -h 20 &
This will allow the command to continue running, even after you close the terminal. You can do more, but you may or may not run out of your 2GB of ram.
10 stages took a bit less than 2 hours to do on my 2GB Digital Ocean server. So, either you have a cascade.xml file, or you stopped the script from running. If you stopped it from running, you should have a bunch of stageX.xml files in your "data" directory. Open that up, see how many stages you did, and then you can run the
opencv_traincascade again, with that number of stages, and you will be immediately given a cascade.xml file. From here, I like to just name it what it is, and how many stages. For me, I did 10 stages, so I am renaming it
watchcascade10stage.xml. That's all we need, so now head back to your main computer with your new cascade file, put it in your working directory, and let's try it out!
import numpy as np import cv2 face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml') eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml') #this is the cascade we just made. Call what you want watch_cascade = cv2.CascadeClassifier('watchcascade10stage.xml') cap = cv2.VideoCapture(0) while 1: ret, img = cap.read() gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) faces = face_cascade.detectMultiScale(gray, 1.3, 5) # add this # image, reject levels level weights. watches = watch_cascade.detectMultiScale(gray, 50, 50) # add this for (x,y,w,h) in watches: cv2.rectangle(img,(x,y),(x+w,y+h),(255,255,0),2)) k = cv2.waitKey(30) & 0xff if k == 27: break cap.release() cv2.destroyAllWindows()
My result:
As you probably noticed, the boxes for the watch is pretty small. It doesn't seem to get the entire watch. Recall our training size was 20x20. Thus, we have a 20x20 box at most. You could do 100x100, but, beware, this will take a very long time to train. Thus, rather than drawing a box, why not just write text over the watch or something? Doing this is relatively simple. Rather than doing
cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),2) in watches, we can do
font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'Watch',(x-w,y-h), font, 0.5, (11,255,255), 2, cv2.LINE_AA)
New result:
Pretty cool! So you probably didn't do the watch with me, how did you do? If you're having trouble, try using everything exactly the same as me. Rather than detecting against a camera feed, detect against an image, and here's one:
Running detection on this image should give you:
I do not know about you, but once I finally got this working, I was very excited!. This gives us: 0.5 MB * 5,000 = 2,500 MB, or 2.5 GB. This gives you 2.5 GB of ram needed to recognize 5,000 objects, and probably most objects you will ever come across in a day. This fascinates me. Consider we have access to all of image-net, and can pull general images for a wide range of objects immediately. Consider further than most of the images on imagenet are basically 100% of the "tracking" object, thus you can probably get by in the positives by not manually noting location and just using 0,0 and the full size of the image. The possibilities for what you can do here are massive... Well, that's all for now. I will likely do some character recognition with OpenCV in the future. If you have any other requests, email me, suggest in the community, or post on the videos.
Enjoy your new powers. Use them for good.
For more super powers, head to the: | https://pythonprogramming.net/haar-cascade-object-detection-python-opencv-tutorial/ | CC-MAIN-2022-40 | refinedweb | 3,326 | 75.1 |
User Tag List
Results 1 to 2 of 2
- Join Date
- Apr 2005
- 485
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
[rails} Get Lookup Table :vehicle_id from params[:mileage]
okay, i'm not even sure if my title makes sense, but here's what i'm trying to do.
list is still controlled by :scaffold so i have no access to update the list action as of now.
i need to extract information from two tables in order to display show.rhtml.
do i need to code the list action before i can do this? i tried...
Code:
def show @mileage = Mileage.find(@params["id"]) @mileage_vehicle = Vehicle.find(@params["vehicle_id"]) render_text @mileage_vehicle.name end
"Couldn't find Vehicle without an ID"
i also tried...
Code:
def show @mileage = Mileage.find(@params["id"]) @mileage_vehicle = Vehicle.find(@params["@mileage.vehicle_id"]) render_text @mileage_vehicle.name end
is there any way to get the vehicle_id from the scaffold list page or do i need to hand code the page before i can get access to it? i think i have to code the list page, but i'd like to avoid it at this point. i think you have to define what you send in the form tag and scaffolding doesn't send :vehicle_id (but does send :id).
tia...
tia...
- Join Date
- Nov 2006
- Location
- Austin, TX
- 9
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
It's kind of hard to figure out what you're trying to do, but typically the form variables are going to look like this:
params[:mileage][:vehicle_id]wtf242
ProgrammingBooks.org - Programming Books Ranked by Programmers
Turn of the Crank - My blog
Robot Walrus - My Art Prints/Posters Blog
Bookmarks | http://www.sitepoint.com/forums/showthread.php?448920-Rails-RMagick-and-Windows-please-help&goto=nextoldest | CC-MAIN-2017-30 | refinedweb | 282 | 72.66 |
0
I'm trying to get this countdown to work. I'm trying to get it to countdown from 10 down to "Blast off", with Blast off being 0. Each output replaces the one before it. I'm very confused. 10 does not print, but 9 to Blast off do. When when it Blast off prints, it keeps printing. Does anyone know how I can print 10 and also print Blast off one time?
Please help!
Thanks!
#include <iomanip> #include <iostream> using namespace std; #ifdef __GNUC__ #include <unistd.h> #endif #ifdef _WIN32 #include <cstdlib> #endif int main() { cout << "CTRL-C to exit...\n"; for (int tens = 10; tens > 0; tens--) { for (int units = 9; units < 10; units--) { cout << tens << '\r'; cout << ' ' << units; if (units < 1) cout << '\r' << "Blast off!\n"; cout.flush(); #ifdef __GNUC__ sleep(1); // one second #endif #ifdef _WIN32 _sleep(1000); // one thousand milliseconds #endif cout << '\r'; // CR } } return 0; } // main
Edited 3 Years Ago by mike_2000_17: Fixed formatting | https://www.daniweb.com/programming/software-development/threads/181555/countdown-program | CC-MAIN-2016-50 | refinedweb | 161 | 91.41 |
Domino’s environment variables give you a safe and easy way to inject sensitive configuration into the execution of your analysis or models.
Environment variables are stored securely. They can only be modified by the owners of the project or the editors of a model. They are not tied to the version history of your project or model, so they can easily be revoked.
Your code might have to connect to external resources, like a database or S3. Often these connections are authenticated through a secure password, key, or token. Do not include this type of secure configuration directly in your source because:
You might want to share source files but not the credentials.
It’s difficult to scrub references to those credentials from a version control system like Git or Domino.
You might want only a more privileged user (like the project owner) to change certain configuration parameters. If configuration is all done through code, then all users that can modify the scripts can change the configuration.
Domino recommends that you store your configuration and permission separately, and have it injected when your code executes.
Use environment variables to set up the secure configuration to be injected when the project executes.
Go to the Settings tab on the project.
In the Environment variables section, add the key/value pairs that will be injected as environment variables:
The values are passed verbatim, so escaping is not required. The value has a 64K length limit.
You can also configure environment variables on a per-user basis. The system injects these variables at execution time for any run that the user starts.
User Environment variables are automatically imported into runs across all projects, and can be accessed like any other Environment Variables. User-specific environment variables are not used or available in models.
Click your username and then select Account Settings to open the Account Settings page.
Go to the User environment variables section.
Configure variables for your user account in the same way as project environment variables (described previously).
Use environment variables to set up your secure configuration to be injected at execution.
Go to the Settings tab on the model to configure.
In the Environment section, add key/value pairs that will be injected as environment variables at execution.
The values are passed verbatim, so no escaping is required. The value has a 64K length limit.
When you add a variable the values are pushed to all running model versions.
Project level and user level environment variables are not used in Models and must be set separately on the model.
If you want to reference custom-defined environment variables in the
pre- or post-setup script of your custom compute environment, you’ll
need to make sure the variable name has the prefix
DRT\_.
You can set the same variable in different places. Each level overrides the previous one in the following order:
Compute environment
Project
User Account
The following shows an example for how a variable’s values can be set and the expected result:
Every language has its own way of reading environment variables. In Python, it might look like this:
import os s3 = S3Client(os.environ['S3_KEY'], os.environ['S3_SECRET'])
For more details, see Python help.
In R, it might look like this:
makeS3Client(Sys.getenv("S3_KEY"), Sys.getenv("S3_SECRET")) | https://admin.dominodatalab.com/en/3.6/user_guide/b224e9/environment-variables-for-secure-credential-storage/ | CC-MAIN-2022-27 | refinedweb | 554 | 56.05 |
WaiAppStatic.Storage.Embedded
Contents
Synopsis
- embeddedSettings :: [(FilePath, ByteString)] -> StaticSettings
- type Etag = Text
- data EmbeddableEntry = EmbeddableEntry {
- mkSettings :: IO [EmbeddableEntry] -> ExpQ
Basic
embeddedSettings :: [(FilePath, ByteString)] -> StaticSettingsSource
Serve the list of path/content pairs directly from memory.
Template Haskell
An Etag is used to return 304 Not Modified responses so the client does not need to download resources a second time. Usually the etag is built from a hash of the content. To disable Etags, you can pass the empty string. This will cause the content to be redownloaded on every request.
data EmbeddableEntry Source
Used at compile time to hold data about an entry to embed into the compiled executable.
Constructors
mkSettings :: IO [EmbeddableEntry] -> ExpQSource
Create a
StaticSettings at compile time that embeds resources directly into the compiled
executable. The embedded resources are precompressed (depending on mime type)
so that during runtime the resource can be served very quickly.
Because of GHC Template Haskell stage restrictions, you must define
the entries in a different module than where you create the
StaticSettings.
For example,
{-# LANGUAGE TemplateHaskell, QuasiQuotes, OverloadedStrings #-} module A (mkEmbedded) where import WaiAppStatic.Storage.Embedded import Crypto.Hash.MD5 (hashlazy) import qualified Data.ByteString.Lazy as BL import qualified Data.ByteString.Base64 as B64 import qualified Data.Text as T import qualified Data.Text.Encoding as T hash :: BL.ByteString -> T.Text hash = T.take 8 . T.decodeUtf8 . B64.encode . hashlazy mkEmbedded :: IO [EmbeddableEntry] mkEmbedded = do file <- BL.readFile "test.css" let emb = EmbeddableEntry {>= \c -> return (hash c, c) |] } return [emb, reload]
The above
mkEmbedded will be executed at compile time. It loads the contents of test.css and
computes the hash of test.css for the etag. The content will be available at the URL somedir/test.css.
Internally,
embedApp below will attempt to compress the content at compile time. The compression will
only happen if the compressed content is shorter than the original and the mime type is either text or
javascript. If the content is compressed, at runtime the precomputed compressed content will be served
with the appropriate HTTP header. If
embedApp decides not to compress the content, it will be
served directly.
Secondly,
mkEmbedded creates a reloadable entry. This will be available at the URL anotherdir/test2.txt.
Whenver a request comes in for anotherdir/test2.txt, the action inside the quasiquote in eContent will
be executed. This will re-read the test2.txt file and recompute its hash.
Finally, here is a module which uses the above action to create a
Application.
{-# LANGUAGE TemplateHaskell #-} module B where import A import Network.Wai (Application) import Network.Wai.Application.Static (staticApp) import WaiAppStatic.Storage.Embedded import Network.Wai.Handler.Warp (run) myApp :: Application myApp = staticApp $(mkSettings mkEmbedded) main :: IO () main = run 3000 myApp | http://hackage.haskell.org/package/wai-app-static-2.0.0.2/docs/WaiAppStatic-Storage-Embedded.html | CC-MAIN-2015-35 | refinedweb | 454 | 52.26 |
#include <stdio.h>
char *strtok(char *s, const char *delim) ;
A sequence of calls to this function split str into tokens, which are sequences of contiguous characters spearated separator (which becomes the beginning of the token). And then scans starting from this beginning of the token for the first character contained in separator, which becomes the end of the token.
This end of the token is automatically replaced by a null-character by the function, and the beginning of the token is returned by the function.
A pointer to the last token found in string.
A null pointer is returned if there are no tokens left to retrieve.
#include <stdio.h>
int main ()
{
char str[] ="- This, a sample string.";
char * pch;
printf ("Splitting string \"%s\" into tokens:\n",str);
pch = strtok (str," ,.-");
while (pch != NULL)
{
printf ("%s\n",pch);
pch = strtok (NULL, " ,.-");
}
return 0;
}
It will proiduce following result:
Splitting string "- This, a sample string." into tokens:
This
a
sample
string | http://www.tutorialspoint.com/cgi-bin/printversion.cgi?tutorial=ansi_c&file=c_strtok.htm | CC-MAIN-2014-52 | refinedweb | 161 | 64.41 |
What is a virtual function?
- Virtual function is a polymorphism technique.
- Refers to performing the same operation in a hierarchy of classes. Typically used in scenarios where the base class pointer is used to hold derived class objects and perform the same operation. Refer example below.
- When a virtual functions is called on a base class pointer the compiler decides to defer the decision on which function to call until the program is running thereby doing late binding.
- The actual function called at run-time depends on the contents of the pointer and not the type.
- Internally the compiler creates a VTABLE for each which has virtual functions.
- Addresses of virtual functions are placed in the VTABLE. If a virtual function is not redefined in the derived class, the base class function address is used in VTABLE.
- When objects are created compiler also places a VPTR pointing to starting address of VTABLE using which the correct function is invoked.
OUTPUT:OUTPUT:
#include <iostream> using namespace std; // Base class with virtual function class Base { int data1; public: Base ( int d1) { data1 = d1; } virtual void print() { cout << "Base" << endl; } }; // Derived class 1 class Derived1 : public Base { int data2; public: Derived1(int d1, int d2) : Base (d1) { data2 = d2; } void print() { cout << "Derived1" << endl; } }; // Derived class 2 class Derived2 : public Base { public: Derived2(int d1) : Base (d1) { } void print() { cout << "Derived2" << endl; } }; void main() { Base b(40); Derived1 d1(10, 20); Derived2 d2(30); Base* ptr; ptr=&d1; ptr->print(); ptr=&d2; ptr->print(); }
Derived1 Derived2
great job articulating concepts with code examples! best c++ programming refs i've seen so far. thanks!
Great Blog ! yeah it's really powerful, clean and consise !
coach factory outlet
christian louboutin shoes
oakley vault
michael kors handbags
adidas stan smith
mont blanc ballpoint pen
christian louboutin shoes
ralph lauren polo
michael kors outlet online
true religion outlet
2016.12.29xukaimin | http://www.sourcetricks.com/2008/05/c-virtual-functions.html | CC-MAIN-2017-04 | refinedweb | 314 | 59.74 |
VS6 project fails to build after .NET 03 install
Discussion in 'C++' started by NMA, Jan32
- Andy Chau
- Nov 20, 2003
VS6 existing projects and .NET togetherMBB, Oct 25, 2005, in forum: ASP .Net
- Replies:
- 2
- Views:
- 413
- Leon Mayne [MVP]
- Oct 26, 2005
JAXB for Win32 C++ VS6JeC, Aug 8, 2003, in forum: XML
- Replies:
- 0
- Views:
- 1,414
- JeC
- Aug 8, 2003
destructor and namespace with vs6 sp0/5 and !g++demo, Jan 5, 2005, in forum: C++
- Replies:
- 6
- Views:
- 422
- Old Wolf
- Jan 5, 2005
Compile Error using C++ VS6.0 using Notes APIAndrew Luke, Sep 9, 2005, in forum: C++
- Replies:
- 3
- Views:
- 821
- Dave Rahardja
- Sep 9, 2005 | http://www.thecodingforums.com/threads/vs6-project-fails-to-build-after-net-03-install.288147/ | CC-MAIN-2014-52 | refinedweb | 114 | 86.74 |
On Tue, 14 Oct 2008 09:39:21 +0800, Boern <cayson.z at gmail.com> wrote: >Hi? Please don't repost the same question twice in one day. Don't repost the same question at all, actually. If you don't get a response, you can try adding more details to your question, but just posting the same text more than once will just annoy people. The only XML Nevow accepts as input is XHTML, plus some tags defined by the Nevow XML namespace. It doesn't know anything about WAP. If you want to include a specific XML declaration in the output, then that's what you should do - include it in the output, not in the XHTML template file. There are a lot of ways to do this. The best depends on how you're generating output. One way is with a nevow.tags.xml tag. Another is to write a string to the request. Jean-Paul | http://twistedmatrix.com/pipermail/twisted-web/2008-October/003955.html | CC-MAIN-2014-42 | refinedweb | 159 | 85.49 |
torrent, 583. 9 Nov 2013 Contains arduino code for audio serial, nordic communication, and basic actuation sensing. unlocker download without downloading all the polar f11 weblink download associated survey downloads 5 Feb 2014 READ Commitmentphobia 5 Things to Consider Before Buying an Unlocked out polar f11 weblink download door unlocked, so no additional action is needed from the customer.
Toggle Keyboard Focus in Control Panel. 15-. 0 keygen by Power File Search v1 4 Sep 2011 Cricket Power ICC Cricket World Cup 2011 Game Crack Keygen e-mail is farooq.
Check notice board in Clubhouse. Ordinateur Packard bell easynote TM-Intel core i5-430M j aimerai nettoyer le ventilo de mon pc portable, pour weblikn faire je doit le d monter.
1 4652.
Even though his friends try and help him, Quantico s01 e14 watch stream online torrent download hd tv show rip ettv. it, Uploaded. Mockplus 5 months ago. unfortunately it isn t How to unlock an Mar 23, 2008 The locking mechanism of my four-drawer lateral filing cabinet will not release, when the key is turned to Unlock. Part 1 - Part polar f11 weblink download Pro Evolution Soccer 2013 - (5,8 GB). 6, released on 02 04 2015. Instrucciones Nota Todo esto funciona solo para la versi n 8.
57MB) (2010) Ludovico Einaudi Find out more about Einaudi s Le Onde. Game description, information and ISO download page for Monster Hunter Portable 3rd (English Patched) Polar f11 weblink download ISO Find great deals on eBay for monster hunter 3 psp monster hunter 2 deblink.
Jan 21, 2014 (CBS NEWS) Kelly Clarkson is having a daughter. 1 Film-Screen adobe flash animation files free download.
Allows you to take screen shots webblink write a memo using the images. to Nero s License Check servers. com or To connect with Minecraft Free gift Codes And free Accounts, sign up for Facebook today.
Polar f11 weblink download of you requested Kaspersky internet security 2015 license key file free download and here we Video Uploaded in videogames Section Kaspersky Internet Security 2015 Kaspersky Internet Security polar f11 weblink download 1 Year License Crack 100 ,Kaspersky 2015 key Kaspersky antivirus 2011, 2012, 2013 activation key free download Link in description NEW Kaspersky Internet Security 2014 365 Days License Key.
by Graceful Tilda Mini Album Tutorial.
000 Canon pixma mp110 driver download windows 7 of data) i Hot Virtual Keyboard 8. docx Payroll Gross-Up Polar f11 weblink download. 6 Windows 8. Game Shows. MacOSX.
Sagasoft Power MP3 Cutter Joiner V1. whatsapp dp images shayari 2016 Images shayari in hindi with images jakhaCouple Photo, Indian Brides, Fashion Style, Jewelry Indianbridalhairstyle Agnisakshi Kannada serial Siddhartha and Sannidhi in Majaa Talkies 1 Naatyabhairavi group dance Siddu-Vinayaka ETV Agnisakshi Ganesha Chathurthi Magazine Novel.
Hoje iremos exemplificar como Pajek should provide tools for analysis and visualization of such net- Additional information for network drawing can be included in input file as well. 8 Nov 2013 Finally, I am satisfied from HTC Sensation XE, i completely underestimated such Download Pandora One Apk Free For Android.
rar. 10 Minecraft How 14 mar. I have a WorkCentre 7435 polar f11 weblink download am NEW Xerox Workcentre 7545 wren n martin free pdf XEROX WORKCENTRE 7545 PCL6 DRIVER. Twitter 515Brewing. Apr. 4 May 2015 Free and paid polar f11 weblink download of data. Our service technicians family tree maker free downloads factory trained on the latest equipment and have years of Processor User Manuals Analogue Outlet is the specialist photographic supplier for all traditional supplies including film, paper, chemistry.
Manual of Mineralogy.
7 8 or Mac and connect with your Facebook friends while on the game at the convenience. Description Only for those who needs to see Armenian text in android you see text in Armenian, you can send Download APK.
10-3 Download Archery Master 3D Apk Full 2. 94 serial keys gen 1 www. poar free download, go media.
1 iPhone 5S 5C 5 4S 4. Spicing Up Xcode was inspired by Hot Pepper Gaming. 2004. Touched With Feb 12, 2016 1080p BluRay Deadpool. Mileena Klassic Finir une tour d ascension multi sur l application mobile (pas encore possible car le multi dowwnload est pas encore 9 May 2015 Klassic Mileena and Kold War Scorpion atm, but I m polqr sure Kold War polar f11 weblink download or close the Steam it says MKX is still running, close it first.
Bell, Ashley, Drexel Hill, PA, Y, Y, N, 0 key free by hacker zmaim. MY RECHARGE MONEY TRANSFER ANDROID. Start Xforce Keygen 64 Bits Version Revit 2013 s 1ihs4 Free Operating system Windows XP Vista 7 8 Total downloads AutoCAD 2013 Full Version O MAKE THE SOFTWARE Polar f11 weblink download VERSION DOWNLOAD UNIVERSAL KEYGEN You dreamed of it. 0 4 ios, iblacklist 9. Comments for Harley Motorcycle Baby Diaper Cake Harley Motorcycle Baby Diaper Cake Average Rating .
Diamond can be used with either a free license or a polar f11 weblink download license. 20130313 sword art online infinity moment english patch iso download return more accurate download results Xilisoft video converter v.
5 APK - Android Launch period the initial promotional. 3 Emacs Lisp forms as formulas 11. Manual Aberc de pr ticas sanit ria de produtos de origem animal (RIISPOA). I am an Download JAWS Unleashed files at Game Front. 2 1 serial flip4mac wmv player pro Adobe Premiere Pro CS4 v4 Polar f11 weblink download. My Sexy Assassin - It s SLAM DUNK TIME like in basketball MP3. Web v. 4 - Descargar Launcher, Instalar y JUGAR Minecraft Launcher 1.
0 (Deluxe Edition) (Album Complet) (2013). Discounts average 11 off with a Sharper Image promo code or coupon. Tomo II- Informaci n bibliogr fica oncologia (1. Error Solution How To Full Development Guide for Sony Xperia M C1905 1904 - The Polar f11 weblink download Tutorial How to Manually Update Moto G 2nd Gen XT1068 to Jun 6, 2013 Download Full Free Official Android 4.
Buy Ruger Bearcat Shopkeeper 22LR 3 SS NIB 499 GunBroker is the largest polar f11 weblink download Contact me BEFORE bidding to confirm Consecutive serial numbers. Game controls are odd, but this is not a result Descargar Super puzzle fighter para Android 630 programas analizados por expertos en Simuladores de 2 votos quieres tener todas sus aplicaciones.
Nemesis of the Roman Empire - Celtic Kings The Punic Wars demo. bumper, M126040, M126041 john deere hood stripe labels, 2 x M118872 345 hood decals, John Deere 4020 tractor overview 1959 730 diesel serial number 7318103. 11). I can create moment of transition, moment of emotion, moment of Apr 20, 2008 Remember Me Forums. que Cyanide nous After your book has been available for a while, not just 2011. ID Flow Photo Polar f11 weblink download Card Software.
herramientas necesarias para el diagn stico y mantenimiento de cualquier Weblink download n a los motores de arranque y alternadores Diagn stico.
7 Download photoshine v. This app will back up every photo you take as soon as polar f11 take it, and it provides a much more accessible way to view all of the photos you have in frutus t regular Lists (login required) create access public, account polar private lists Top of weblink download range Friedland wireless door chime kit, suitable for mounting in full Friedland Friedland door bell kit battery opperated white underdome bell type.
import android. hash F150754C8BA95BB73ABAAAA02D7B3307BFD66480, Download for Top 1000 of the 1980 Part 1 colombo bt org torrent download for free. Crack Copiapop ANSYS descargar weblink download. Set of instructions or patch used to remove copy protection from a piece of software or to unlock features from a demo Dec 05, 2011 Full Torrent pooar indir.
direito do trabalho brasileiro trata sobre a quest o da weblink download provis ria concedida a gestante. I am in the process of ddownload f11 portable power pop that thang caked up mp3skull download for my Photo Gear.
Transformers 3 Dark of The Moon 3D (2011) HDTV 720p Half SBS Results 1 - 25 Transformers 3 Dark Of The Moon 2011 Pplar 1080p Dual Audio Hindi ORG Transformers Dark of the Moon 3D HSBS 1080p in Dual Audio transformers dark moon x264 results 1-25 from 111 Download torrent Transformers Dark Of The Moon 2011 1080p 3D BluRay Half-SBS x264 Transformers 3 - Dark of the Moon (2011) BrRip Dual Audio Hindi English 420p 450MB x264 Oct 7, 2011 Download Transformers 3 Dark of the Moon 2011 dvd rip nlx torrent or any other torrent from the Video Movies.
The size of the Install MQ using the rpm package manager 11 Mar 2016 Red Hat Enterprise Linux 7 for Power System-64 To install Cach on Microsoft Windows Server 2008 Amazon EC2 for x86 dvd cloner download fullselect the standard. although with some more uncommon fonts, it will not work.
1 I simply disable the internal keyboard ( Hardware Disable built in laptop keyboard download mail sounds mac Windows 10 (self.
21 Jul 2015 Windows 10 has a long list of keyboard shortcuts that help you Windows Q Opens Cortana s Home View, enables search by speech or keyboard input. 2 All Feel free to post your Traktor Scratch Pro 2.
Works by Orson Scott Card Ender s Game, Speaker for polar f11 weblink download Dead, The War of the Worlds, Xenocide, Ender s Shadow, Children of the Mind, Shadow of the Hegemon, Orson Scott Card s accomplishments are well known in science fiction circles (Ender s Game, How would they know what reading and writing were if they saw Aug 11, 2011 Special Series Summer Books 2011 NPR presents the best fiction, nonfiction, mysteries and cookbooks for summer 2011.
Polar f11 weblink download movie reviews from 2013. All drivers available for download have been scanned by Descargar Dragon City Cheats - Hack Generator 20 May 2016 First Camera app with almost full manual controls for Downnload One and.
5disc. updates for OS 15 ub 2016 Trans Mac wbelink.
Windows 7 - Classroom and An On-Screen Keyboard can be useful for some languages such as Russian and Arabic. Subway Surfers Polar f11 weblink download Game versi n completa Descarga gratuita. torrentfunk. 5 Dec 2012 I ve run into this nasty virus a couple of times and it can be a real pain you to pay 100 through Moneypak to unlock your computer.
6 serial code maker Arcsoft Totalmedia Arcsoft Totalmedia Theatre 3 Platinum 5. Torrent Download torrent Keith Sweat - Til The Morning (2011) MP3 Nlt-release Free Download Music Keith Sweat Yumi video by left clicking Description The New Single From Til The Morning Til The Morning Is Out Now Get It How to download ios 6.0 for iphone 3gs ITunes keith sweat - free download Start downloading keith sweat now for free.
We have the largest serial numbers data base. 10 full crack. However, using Action Replay codes to access the course as Yoshi reveals that 3 Apr 2015 Watch Video Super Mario 64 DS - 100 Walkthrough Part 8 - Unlocking Wario Collecting Stars Online.
WARGAME AIRLAND BATTLE KEYGEN Pro Evolution Soccer 2014 CD-Key Generator LATEST RonyaSoft Collection 26. The key is understanding what polar f11 weblink download life condition of buddhahood is and how it is Mar 9, 2016 So the American people will see how she answers our questions, although I would note that on matters of - classified matters and ongoing Jun 3, 2005 A key goal was to make sure they could polar f11 weblink download future Republican presidents from If no one s there to ask questions and challenge deceptive answers, Clinton and other Democrats expressed puzzlement about why the Jan 20, 2000 When he did not answer the door, Ms.
Even at Later on he left the island and became a judge. 2222-2763 6.
91 portable. El deber polar f11 weblink download elaborar y proponer a la m xima autoridad el Manual de Organizaci n y Download ip scanner free for mac, anexos La memoria de c lculo y el informe del Constructor, SALVADOR G MEZ OLIVER.
183 keygen, Kaspersky Password Manager 5. is that there s now no keyboard shortcut for quitting Firefox qeblink Mac. 41 MB. Abogada De Pedraza Abogados.
doc, 30-09-2007, 10 page MSWord document includes exploded views and part Owner s manual for Kenmore Elite Washer and Dryer Do It Yourself. English. -put a paper towel or absorbant between the screen and keyboard. Wow thank you SO MUCH for posting this. 03 AOMX. post to classifieds Related Posts to craigslist boats little rock arkansas. On top of that, these geniuses at Polar f11 weblink download eliminated a hack to install apps in the microSD card with Windows 8.
ca Electronics.
Title Autocad 2013 Crack rar. com training library Premiere Pro CC Tips. Now I can 504 604 MANUAL V2. Polar f11 weblink download Audio FLAC (Problems with magnets links are fixed by upgrading your torrent client ) downlod. ah le das click en la opci n abrir carpeta para ver archivos free download xeno tactic game eliges la carpeta crack. Our Facebook page is a great place to see whats going on with Big Daddy RV s. | http://tanhaysido.y0.pl/polar-f11-weblink-download.html | CC-MAIN-2020-10 | refinedweb | 2,180 | 60.35 |
Dexterity
A short example
A few people, myself included, have been working on a system we've called "Dexterity". This is essentially a way to build content types for Plone. It leverages the "light-weight" (also known as "Zope 3 style" or "non-Archetypes") metaphor that has been perpetuated through plone.app.content and seen in packages like borg.project and oi.plum, and aims to make it easy to build new content types, even for non-programmers.
Dexterity isn't really about building a new framework. Rather, it's about taking what we have in Zope 3 and CMF and making it accessible and productive. It also explicitly supports through-the-web content type creation, with a well-defined path to move to filesystem code, as well as support for patterns where some work may be done TTW or with visual tools, and other work may be carried out by programmers on the fielsystem.
Dexterity now has the basic scaffolding in place, though there is much work to be done, especially around the UI. It is nowhere near being a viable replacement for Archetypes at this stage, though in the future I hope Archetypes and Dexterity can converge.
An example
Let's take a quick look at how you may use Dexterity.
Starting through the web
The starting point for many people will probably be a through-the-web GUI, accessible via the Plone control panel. This is one of the bits that isn't done yet, though David Glick has started experimenting with the form components to support this. The UI will allow a site admin to create a new type and build a schema from standard fields, as well as to choose a set of "behaviours" - such as DC metadata, locking or versioning - to enable for the type.
Once the type has been built, it will be saved in a Factory Type Information object (of type "Dexterity FTI") in portal_types. This part works today. The FTI is has a number of properties, such as:
- Name, title and description
- Add permission
- Content class (Dexterity has a default class for Container types and one for Item types, though you can use another one)
- Views and actions
- A schema
The schema can be defined in a few different ways, but is ultimately just an interface with zope.schema fields. For types defined through-the-web, the schema will be serialised to XML. The XML syntax looks like this:
<model> <schema> <field name="title" type="zope.schema.TextLine"> <title>Title</title> <required>True</required> </field> <field name="body" type="zope.schema.Text"> <title>Body text</title> <required>False</required> </field> </schema> </model>
You can see an example of a type like this here. Since this is a GenericSetup export of the FTI, the XML of the schema is escaped, but in the browser, you can type it as normal.
Moving the model to a separate file
With a type like this installed, you can add content and fill in the two fields - title and body. Let's say, however, that you got tired of manipulating the type through-the-web. Perhaps you had a nice visual GUI tool that could produce XML in the format about (like ArchGenXML - again, something that doesn't exist just yet). Or perhaps you really like typing XML into a text editor.
In this case, you could move the file to a package on the filesystem, and referenced it in the FTI in the "Model file" field, leaving the "Model source" field blank. If, for example the model file was called "page.xml" and lived in a package "example.package", you could set this field to "example.package:page.xml".
There is an example of such a type here, where the model lives here.
Creating a filesystem interface
Dexterity can create add forms, edit forms and standard views for types defined with an XML schema only. It will also be able to support through-the-web view customisation, using the portal_view_customizations tool. However, as your requirements become more complex, you will almost certainly need to write some Python code that uses your type.
In many cases, you will not need to write a custom class. Instead, you can register adapters and views for your type by using a concrete/filesystem interface. Let's say we had our schema in a file called page.xml. We could then add code like this to create a filesystem interface:
from plone.dexterity import api class IPage(api.Schema): api.model('page.xml')
To use this for your type, you would leave the "Model source" and "Model file" field blank in the FTI, and specify a dotted name to the IPage interface in the "Schema" field instead.
The syntax above makes use of Grok-like directives. The IPage interface will be populated on startup with the fields defined in page.xml, and annotated with other metadata (such as security or widget hints) that may be encoded in the XML file. For this to take place, your package must have the following line in configure.zcml:
<grok:grok
There is an example just like this with its interface defined here, the XML file here and the FTI here.
Using a custom class
So far, our types have all used the standard Dexterity Item and Container classes. Dexterity uses a particular content factory that will make sure that instances actually provide the schema interface in question, and that the fields promised by the interface are all represented on the instance. This makes introspection and attribute manipulation straightforward - for example, you could access the "body" field of a context providing IPage as context.body.
For some uses cases, you may require a custom class. You may need this less often that you'd think - it is often preferable to use a filesystem interface and register adapters and views for this interface. However, if you want to use custom properties for some schema fields, or you want to override Zope 2/CMF functions like Title() or Description(), then a custom class may be in order.
To use a custom class, all you have to do is to specify the dotted name to the class in the "Class" attribute of the FTI. Typically, you'll want your class to derive from one of the standard Dexterity classes as well. For example, here is a type that uses a Python-only interface (no XML this time) and a concrete class. Again, this class is "grokked" on startup to take care of registering security and ensuring that the schema interface is properly provided by objects of the class.
from zope.interface import implements, Interface from zope import schema from plone.dexterity import api class IPyPage(api.Schema): title = schema.TextLine(title=u"Title") summary = schema.Text(title=u"Summary", description=u"Summary of the body", readonly=True) body = schema.Text(title=u"Body text", required=False, default=u"Body text goes here") class PyPage(api.Item): implements(IPyPage) @property def summary(self): if self.body: return "%s..." % self.body[:30] else: return "" def Description(self): return self.summary
Where we are
Dexterity is obviously far from finished, but already does a lot. The main thing we need right now is feedback and suggestions.
Then, there are a few functional areas that could use some volunteers, including:
- There is a syntax for specifying field-level security in the XML schema serialisation, but it is not yet applied to objects properly.
- The standard add and edit forms, which use z3c.form, need to be neatned up a bit, and they need to support composite schemata with fieldsets for standard Dublin Core metadata.
- There is a syntax for giving widget hints in the XML schema serialisation, but this is not used by the forms yet.
- The support for declaratively supported "behaviours", particularly at the form/UI level, is still incomplete.
- The schema format does not support i18n yet.
- If a schema is changed and there are "live" objects in the ZODB, these may need to migrated. We would like to support common types of migrations directly.
- The whole Plone UI story for creating and managing types is only just begun.
- Tool support ala ArchGenXML is still just a dream.
If you are interested in helping out, please do let us know!
Yay!! | http://martinaspeli.net/articles/dexterity | CC-MAIN-2017-17 | refinedweb | 1,382 | 63.29 |
#include <Local_Tokens.h>
Inheritance diagram for ACE_Token_Proxy:
Interface for all Tokens in ACE. This class implements the synchronization needed for tokens (condition variables etc.) The algorithms for the operations (acquire, release, etc.) operate on the generic ACE_Tokens interface. Thus, the _type_ of token (mutex, rwlock) can be set at construction of ACE_Token_Proxy. You can use all Tokens in ACE through the ACE_Token_Proxy by passing the proper values at construction. Alternatively, there are class definitions which "know" how to do this (ACE_Local_Mutex, ACE_Local_RLock, ACE_Local_WLock). To add a new type of token (e.g. semaphore), this class is not changed. See ACE_Token_Manager for details. Tokens (e.g. ACE_Mutex_Token) assume that it can always call <ACE_Token_Proxy::token_acquired> on a new token owner. This is not a problem for synchronous use of token proxies (that is, when acquires block until successful.) However, for implementations of the Token Server, which may use asynch operations, the proxy can not go away after an acquire until the token is acquired. This is not really a problem, but should be understood.
Definition at line 746 of file Local_Tokens.h. | http://www.theaceorb.com/1.3a/doxygen/ace/classACE__Token__Proxy.html#l1 | crawl-003 | refinedweb | 181 | 51.14 |
Alpha - new feature work and enhancements in development
Beta - bug fixing only, no new feature work is expected
RC - a release candidate, which will be released as a final release unless critical issues are found
Note: All dev releases are subject to breaking changes for new work since the prior release.
Changes in 1.10.0-beta6:
The metadata protocol extension added in 1.10.0-beta5 now requires opt-in when the protocol is defined, using :extend-via-metadata.
The JavaReflector under clojure.reflect has been datafied
CLJ-2432 - Added clojure.core/requiring-resolve which is like
resolve but will
require the symbol’s namespace if needed.
CLJ-2427 - fix bug in CompilerException.toString() that could cause a secondary exception to be thrown while making the exception string, obscuring the original exception.
CLJ-2430 - more work on error phases, ex-triage, and allowing prepl to better use the new error reporting infrastructure
Changes in 1.10.0-beta5:
In addition to prior methods of extension, values can now extend protocols by adding metadata where keys are fully-qualified symbols naming protocol functions and values are function implementations. Protocol implementations are checked first for direct definitions (defrecord, deftype, reify), then metadata definitions, then external extensions (extend, extend-type, extend-protocol). datafy has been updated to use this mechanism.
symbol can now be passed vars or keywords to obtain the corresponding symbol
CLJ-2420 - error reporting enhancements - more refined phase reporting, new clojure.main/ex-triage split out of clojure.main/ex-str, execution errors now report the top user line in the stack trace omitting frames from core, enhancements to providing file and line via meta on a form
CLJ-2425 add java 11 javadoc url
CLJ-2424 fix test bug from CLJ-2417
1.10.0-beta4 includes the following changes since 1.10.0-beta3:
1.10.0-beta3 includes the following changes since 1.10.0-RC1:
1.10.0-RC1 is the same code as 1.10.0-beta2 (just minor changelog updates).
1.10.0-beta2 includes the following changes since 1.10.0-beta1:
CLJ-2414 - Regression in reflectively finding default methods
CLJ-2415 - Error cause should always be on 2nd line of error message
Added clojure.datafy:
clojure.datafy is a facility for object to data transformation. The
datafy and
nav functions can be used to transform and (lazily) navigate through object graphs. The data transformation process can be influenced by consumers using protocols or metadata. datafy is alpha and subject to change.
1.10.0-beta1 includes the following changes since 1.10.0-alpha9:
1.10.0-alpha9 includes the following changes since 1.10.0-alpha8:
CLJ-2374 - Add type hint to address reflection ambiguity in JDK 11
CLJ-1209 - Print ex-data in clojure.test error reports
CLJ-1120 - Add ex-cause and ex-message as in CLJS for portabile error handling
CLJ-2385 - Delay start of tap-loop thread (addresses graal native-image issue)
CLJ-2407 - Fix errors in unit tests
CLJ-2066 - Add reflection fallback for --illegal-access warnings in Java 9+
CLJ-2375 - Fix usage of deprecated JDK apis
CLJ-2358 - Fix invalid arity of read+string
1.10.0-alpha8 includes the following changes since 1.10.0-alpha7:
CLJ-2297 - PersistentHashMap leaks memory when keys are removed with
without
CLJ-1587 - PersistentArrayMap’s assoc doesn’t respect HASHTABLE_THRESHOLD
CLJ-2050 - Remove redundant key comparisons in HashCollisionNode
CLJ-2349 - report correct line number for uncaught ExceptionInfo in clojure.test
CLJ-1403 - ns-resolve might throw ClassNotFoundException but should return nil
CLJ-1654 - Reuse seq in some
CLJ-1764 - partition-by runs infinite loop when one element of infinite partition is accessed
CLJ-2044 - add arglist meta for functions in clojure.instant
CLJ-1797 - Mention cljc in error when require fails
CLJ-1832 - unchecked-* functions have different behavior on primitive longs vs boxed Longs
CLJ-1366 - The empty map literal is read as a different map each time
CLJ-1550 - Classes generated by deftype and defrecord don’t play nice with .getPackage
CLJ-2031 - clojure.walk/postwalk does not preserve MapEntry type objects
CLJ-1435 - 'numerator and 'denominator fail to handle integral values (i.e. N/1)
CLJ-2257 - docstring: fix typo in proxy
CLJ-2332 - docstring: fix repetition in remove-tap
CLJ-2122 - docstring: describe result of flatten as lazy
Clojure 1.10.0-alpha7 is now available.
1.10.0-alpha7 includes the following changes since 1.10.0-alpha6:
Update deps to latest spec.alpha (0.2.176) and core.specs.alpha (0.2.44)
CLJ-2373 - categorize and overhaul printing of exception messages at REPL
CLJ-1279 - report correct arity count for function arity errors inside macros
CLJ-2386 - omit ex-info construction stack frames
CLJ-2394 - warn in pst that stack trace for syntax error failed before execution
CLJ-2396 - omit :in clauses when printing spec function errors if using default explain printer
Clojure 1.10.0-alpha6 is now available.
1.10.0-alpha6 includes the following changes since 1.10.0-alpha5:
Clojure 1.10.0-alpha5 is now available.
1.10.0-alpha5 includes the following changes since 1.10.0-alpha4:
CLJ-2363 - make Java 8 the minimum requirement for Clojure (also bumps embedded ASM to latest) - thanks Ghadi Shayban!
CLJ-2284 - fix invalid bytecode generation for static interface method calls in Java 9+ - thanks Ghadi Shayban!
CLJ-2330 - fix brittle test that fails on Java 10 build due to serialization drift
CLJ-2362 - withMeta() should return identity when new meta is identical to prior
CLJ-1130 - when unable to match static method, improve error messages
CLJ-2089 - sorted colls with default comparator don’t check that first element is Comparable
CLJ-2163 - add test for var serialization
Bump dependency version for spec.alpha to latest, 0.2.168 (see changes)
Bump dependency version for core.specs.alpha to latest, 0.2.36 (see changes)
When using the clj tool and deps.edn, we recommend adding an alias to your ~/.clojure/deps.edn:
{:aliases {:clj/next {:override-deps {org.clojure/clojure {:mvn/version "1.10.0-alpha5"}}}}}
You can then run any of your projects with the latest Clojure dev release by activating the alias with clj:
clj -A:clj/next
There is a new Maven profile and Ant target for building an executable Clojure jar with deps included (and test.check). This can be useful for doing dev on Clojure itself, or for cloning the repo and doing a quick build to get something runnable.
The readme.txt has been updated to include information about how to create and run a local jar.
Stopped publishing the clojure-VERSION.zip file as part of the release.
1.9.0-beta2 includes the following changes since 1.9.0-beta1:
1.9.0-beta1 includes the following changes since 1.9.0-alpha20:
1.9.0-alpha20 includes the following changes since 1.9.0-alpha19:
CLJ-1074 - (new) add new ## reader macro for symbolic values, and read/print support for double vals ##Inf, ##-Inf, ##NaN (see the example after this list)
CLJ-1454 - (new) add swap-vals! and reset-vals! that return both old and new values (see the example after this list)
CLJ-2184 - (errors) propagate meta in doto forms to improve error reporting
CLJ-2210 - (perf) cache class derivation in compiler to improve compiler performance
CLJ-2070 - (perf) clojure.core/delay - improve performance
CLJ-1917 - (perf) reducing seq over string should call String/length outside of loop
CLJ-1901 - (perf) amap - should call alength only once
CLJ-99 - (perf) min-key and max-key - evaluate k on each arg at most once
CLJ-2188 - (perf) slurp - mark return type as String
CLJ-2108 - (startup time) delay loading of spec and core specs (still more to do on this)
CLJ-2204 - (security) disable serialization of proxy classes to avoid potential issue when deserializing
CLJ-2048 - (fix) specify type to avoid ClassCastException when stack trace is elided by JVM
CLJ-1887 - (fix) IPersistentVector.length() - implement missing method
CLJ-1841 - (fix) bean - iterator was broken
CLJ-1714 - (fix) using a class in a type hint shouldn’t load the class
CLJ-1398 - (fix) clojure.java.javadoc/javadoc - update doc urls
CLJ-1371 - (fix) Numbers.divide(Object, Object) - add checks for NaN
CLJ-1358 - (fix) doc - does not expand special cases properly (try, catch)
CLJ-1705 - (fix) vector-of - fix NullPointerException if given unrecognized type
CLJ-2170 - (doc) fix improperly located docstrings
CLJ-2156 - (doc) clojure.java.io/copy - doc char[] support
CLJ-2051 - (doc) clojure.instant/validated docstring - fix typo
CLJ-2104 - (doc) clojure.pprint docstring - fix typo
CLJ-2028 - (doc) filter, filterv, remove, take-while - fix docstrings
CLJ-1873 - (doc) require, *data-readers* - add .cljc files to docstrings
CLJ-1159 - (doc) clojure.java.io/delete-file - improve docstring
CLJ-2039 - (doc) deftype - fix typo in docstring
CLJ-1918 - (doc) await - improve docstring re shutdown-agents
CLJ-1837 - (doc) index-of, last-index-of - clarify docstrings
CLJ-1826 - (doc) drop-last - fix docstring
CLJ-1859 - (doc) zero?, pos?, neg? - fix docstrings
Make the default import set public in RT
Can now bind
*reader-resolver* to an impl of LispReader$Resolver to control the reader’s use of namespace interactions when resolving autoresolved keywords and maps.
Tighten autoresolved keywords and autoresolved namespace map syntax to support only aliases, as originally intended
Updated to use core.specs.alpha 0.1.24
CLJ-1793 - Clear 'this' before calls in tail position
CLJ-2091 clojure.lang.APersistentVector#hashCode is not thread-safe
CLJ-1860 Make -0.0 hash consistent with 0.0
CLJ-2141 Return only true/false from qualified-* predicates
CLJ-2142 Fix check for duplicate keys with namespace map syntax
CLJ-2128 spec error during macroexpand no longer throws compiler exception with location
Updated to use spec.alpha 0.1.123
1.9.0-alpha16 includes the following changes since 1.9.0-alpha15:
The namespaces clojure.spec, clojure.spec.gen, clojure.spec.test have been moved to the external library spec.alpha which Clojure includes via dependency
These namespaces have been changed and now have an appended ".alpha": clojure.spec.alpha, clojure.spec.gen.alpha, clojure.spec.test.alpha
All keyword constants in clojure.spec (like :clojure.spec/invalid) follow the same namespace change (now :clojure.spec.alpha/invalid)
spec-related system properties related to assertions did NOT change
The specs for clojure.core itself in namespace clojure.core.specs have been moved to the external library core.specs.alpha which Clojure now depends on
The clojure.core.specs namespace has changed to clojure.core.specs.alpha. All qualified spec names in that namespace follow the same namespace change (most people were not using these directly)
In most cases, you should be able to update your usage of Clojure 1.9.0-alphaX to Clojure 1.9.0-alpha16 by:
Updating your Clojure dependency to [org.clojure/clojure "1.9.0-alpha16"] - this will automatically pull in the 2 additional downstream libraries
Changing your namespace declarations in namespaces that declare or use specs to:
(:require [clojure.spec.alpha :as s] [clojure.spec.gen.alpha :as gen] [clojure.spec.test.alpha :as stest])
We are moving spec out of the Clojure repo/artifact and into a library to make it easier to evolve spec independently from Clojure. While we consider spec to be an essential part of Clojure 1.9, there are a number of design concerns to resolve before it can be finalized. This allows us to move towards a production Clojure release (1.9) that depends on an alpha version of spec. Users can also pick up newer versions of the spec alpha library as desired. Additionally, this is a first step towards increased support for leveraging dependencies within Clojure.
We will be creating two new contrib libraries that will contain the following (renamed) namespaces:
org.clojure/spec.alpha clojure.spec.alpha (previously clojure.spec) clojure.spec.gen.alpha (previously clojure.spec.gen) clojure.spec.test.alpha (previously clojure.spec.test) org.clojure/core.specs.alpha clojure.core.specs.alpha (previously clojure.core.specs)
In most cases, we expect that users have aliased their reference to the spec namespaces and updating to the changed namespaces will only require a single change at the point of the require.
How will ClojureScript’s spec implementation change?
ClojureScript will also change namespace names to match Clojure. Eventually, the ClojureScript implementation may move out of ClojureScript and into the spec.alpha library - this is still under discussion.
Why do the libraries and namespaces end in alpha?
The "alpha" indicates that the spec API and implementation is still subject to change.
What will happen when the spec api is no longer considered alpha?
At that point we expect to release a non-alpha version of the spec library (with non-alpha namespaces). Users may immediately begin to use that version of spec along with whatever version of Clojure it depends on. Clojure itself will depend on it at some later point. Timing of all these actions is TBD.
Will the library support Clojure 1.8 or older versions?
No. spec uses new functions in Clojure 1.9 and it has never been a goal to provide spec for older versions. Rather, we are trying to accelerate the release of a stable Clojure 1.9 so that users can migrate forward to a stable production release with access to an alpha version of spec, and access to ongoing updated versions as they become available.
1.9.0-alpha15 includes the following changes since 1.9.0-alpha14:
CLJ-2043 - s/form of conformer is broken
CLJ-2035 - s/form of collection specs are broken
CLJ-2100 - s/form of s/nilable should include the original spec, not the resolved spec
Specs:
CLJ-2062 - added specs for
import and
refer-clojure
CLJ-2114 - ::defn-args spec incorrectly parses map body as a prepost rather than function body
CLJ-2055 - binding-form spec parses symbol-only maps incorrectly
Infrastructure:
1.9.0-alpha14 includes the following changes since 1.9.0-alpha13:
NEW
into now has a 0-arity (returns []) and 1-arity (returns the coll you pass)
NEW
halt-when is a transducer that ends transduction when pred is satisfied. It takes an optional fn that will be invoked with the completed result so far and the input that triggered the predicate.
CLJ-2042 - clojure.spec/form of clojure.spec/? now resolves pred
CLJ-2024 - clojure.spec.test/check now fully resolves aliased fspecs
CLJ-2032 - fixed confusing error if fspec is missing :args spec
CLJ-2027 - fixed 1.9 regression with printing of
bean instances
CLJ-1790 - fixed error extending protocols to Java arrays
CLJ-1242 - = on sorted sets or maps with incompatible comparators now returns false rather than throws
1.9.0-alpha13 includes the following changes since 1.9.0-alpha12:
s/conform of nilable was always returning the passed value, not the conformed value
s/nilable now creates a generator that returns nil 10% of the time (instead of 50% of the time)
s/nilable now delays realizing the predicate spec until first use (better for creating recursive specs)
clojure.spec.gen now provides a dynload version of clojure.test.check.generators/frequency
1.9.0-alpha12 includes the following changes since 1.9.0-alpha11:
spec performance has been improved for many use cases
spec explain printer is now pluggable via the dynamic var
clojure.spec/*explain-out*
which should be a function that takes an explain-data and prints to
*out*
when a macro spec fails during macroexpand, throw ex-info with explain-data payload rather than IllegalArgumentException
pprint prints maps with namespace literal syntax when
*print-namespace-maps* is true
CLJ-1988 - coll-of, every extended to conform sequences properly
CLJ-2004 - multi-spec form was missing retag
CLJ-2006 - fix old function name in docstring
CLJ-2008 - omit macros from checkable-syms
CLJ-2012 - fix ns spec on gen-class signatures to allow class names
CLJ-1224 - record instances now cache hasheq and hashCode like maps
CLJ-1673 - clojure.repl/dir-fn now works on namespace aliases
1.9.0-alpha11 includes the following changes since 1.9.0-alpha10:
Clojure now has specs for the following clojure.core macros: let, if-let, when-let, defn, defn-, fn, and ns. Because macro specs are checked during macroexpansion invalid syntax in these macros will now fail at compile time whereas some errors were caught at runtime and some were not caught at all.
CLJ-1914 - Fixed race condition in concurrent range realization
CLJ-1870 - Fixed reloading a defmulti removes metadata on the var
CLJ-1744 - Clear unused locals, which can prevent memory leaks in some cases
CLJ-1423 - Allow vars to be invoked with infinite arglists (also, faster)
CLJ-1993 - Added
*print-namespace-maps* dynamic var that controls whether to use namespace map syntax for maps with keys from the same namespace. The default is false, but standard REPL bindings set this to true.
CLJ-1985 - Fixed with-gen of conformer losing unform fn
Fixed clojure.spec.test/check to skip spec’ed macros
Fixed regression from 1.9.0-alpha8 where type hints within destructuring were lost
Fixed clojure.spec/merge docstring to note merge doesn’t flow conformed values
Fixed regex ops to use gen overrides if they are used
1.9.0-alpha10 includes the following changes since 1.9.0-alpha9:
NEW clojure.core/any? - a predicate that matches anything. any? has built-in gen support. The :clojure.spec/any spec has been removed. Additionally, gen support has been added for some?.
keys* will now gen
gen overrides (see c.s/gen, c.s./exercise, c.s.t/check, c.s.t/instrument) now expect no-arg functions that return gens, rather than gens
CLJ-1977 - fix regression from alpha9 in data conversion of Throwable when stack trace is empty
1.9.0-alpha9 includes the following changes since 1.9.0-alpha8:
NEW clojure.spec/assert - a facility for adding spec assertions to your code. See the docs for
*compile-asserts* and assert for more details.
clojure.spec/merge - now merges rather than flows in conform/unform
clojure.spec.test/instrument now reports the caller that caused an :args spec failure and ignores spec’ed macros
clojure.spec.test -
test,
test-fn,
testable-syms renamed to
check,
check-fn, and
checkable-syms to better reflect their purpose. Additionally, some of the return value structure of
check has been further improved.
clojure.core/Throwable→map formerly returned StackTraceElements which were later handled by the printer. Now the StackTraceElements are converted to data such that the return value is pure Clojure data, as intended.
1.9.0-alpha8 includes the following changes since 1.9.0-alpha7:
The collection spec support has been greatly enhanced, with new controls for conforming, generation, counts, distinct elements and collection kinds. See the docs for every, every-kv, coll-of and map-of for details.
instrumenting and testing has been streamlined and made more composable, with powerful new features for spec and gen overrides, stubbing, and mocking. See the docs for these functions in clojure.spec.test: instrument, test, enumerate-ns and summarize-results.
Namespaced keyword reader format, printing and destructuring have been enhanced for lifting namespaces up for keys, supporting more succinct use of fully-qualified keywords. Updated docs will be added to clojure.org soon.
Many utilities have been added, for keys spec merging, fn exercising, Java 1.8 timestamps, bounded-count and more.
Changelog:
clojure.spec:
[changed] map-of - now conforms all values and optionally all keys, has additional kind, count, gen options
[changed] coll-of - now conforms all elements, has additional kind, count, gen options. No longer takes init-coll param.
[added] every - validates a collection by sampling, with many additional options
[added] every-kv - validates a map by sampling, with many additional options
[added] merge
[changed] gen overrides can now be specified by either name or path
[changed] fspec generator - creates a function that generates return values according to the :ret spec and ignores :fn spec
[added] explain-out - produces an explain output string from an explain-data result
[changed] explain-data - output is now a vector of problems with a :path element, not a map keyed by path
[added] get-spec - for looking up a spec in the registry by keyword or symbol
[removed] fn-spec - see get-spec
[added] exercise-fn - given a spec’ed function, returns generated args and the return value
All instrument functions moved to clojure.spec.test
clojure.spec.test:
[changed] instrument - previously took a var, now takes either a symbol, namespace symbol, or a collection of symbols or namespaces, plus many new options for stubbing or mocking. Check the docstring for more info.
[removed] instrument-ns - see instrument
[removed] instrument-all - see instrument
[changed] unstrument - previously took a var, now takes a symbol, namespace symbol, or collection of symbol or namespaces
[removed] unstrument-ns - see unstrument
[removed] unstrument-all - see unstrument
[added] instrumentable-syms - syms that can be instrumented
[added] with-instrument-disabled - disable instrument’s checking of calls within a scope
[changed] check-var renamed to test and has a different signature, check docs
[changed] run-tests - see test
[changed] run-all-tests - see test
[changed] check-fn - renamed to test-fn
[added] abbrev-result - returns a briefer description of a test
[added] summarize-result - returns a summary of many tests
[added] testable-syms - syms that can be tested
[added] enumerate-namespace - provides symbols for vars in namespaces
clojure.core:
[changed] - inst-ms now works with java.time.Instant instances when Clojure is used with Java 8
[added] bounded-count - if coll is counted? returns its count, else counts at most first n elements of coll using its seq
1.9.0-alpha7 includes the following changes since 1.9.0-alpha6 (all BREAKING vs alpha5/6):
clojure.core: - long? ⇒ int? - now checks for all Java fixed precision integer types (byte,short,integer,long) - pos-long? ⇒ pos-int? - neg-long? ⇒ neg-int? - nat-long? ⇒ nat-int?
clojure.spec: - long-in-range? ⇒ int-in-range? - long-in ⇒ int-in
If you are interested in checking specifically for long?, please use #(instance? Long %).
Sorry for the switcheroo and welcome to alphatown!
1.9.0-alpha6 includes the following changes since 1.9.0-alpha5:
& regex op now fails fast when regex passes but preds do not
returns from alt/or are now map entries (supporting key/val) rather than 2-element vector
[BREAKING] fn-specs was renamed to fn-spec and returns either the registered fspec or nil
fspec now accepts ifn?, not fn?
fspec impl supports keyword lookup of its :args, :ret, and :fn specs
fix fspec describe which was missing keys and improve describe of :args/ret/fn specs
instrument now checks only the :args spec of a var - use the clojure.spec.test functions to test :ret and :fn specs
Added generator support for bytes? and uri? which were accidentally left out in alpha5
1.9.0-alpha5 includes the following changes since 1.9.0-alpha4:
Fixes: - doc was printing "Spec" when none existed - fix ? explain
New predicates in core (all also now have built-in generator support in spec): - seqable? - boolean? - long?, pos-long?, neg-long?, nat-long? - double?, bigdec? - ident?, simple-ident?, qualified-ident? - simple-symbol?, qualified-symbol? - simple-keyword?, qualified-keyword? - bytes? (for byte[]) - indexed? - inst? (and new inst-ms) - uuid? - uri?
New in spec: - unform - given a spec and a conformed value, returns the unconformed value - New preds: long-in-range?, inst-in-range? - New specs (with gen support): long-in, inst-in, double-in
1.9.0-alpha4 includes the following changes since 1.9.0-alpha3:
fix describe empty cat
improve update-in perf
optimize seq (&) destructuring
1.9.0-alpha3 includes the following changes since 1.9.0-alpha2:
Macro fdef specs should no longer spec the implicit &form or &env [BREAKING CHANGE]
multi-spec includes dispatch values in path
multi-spec no longer requires special default method
fix for rep* bug
added explain-str (explain that returns a string)
improved s/+ explain
explain output tweaked
fix test reporting
1.9.0-alpha2 includes the following changes since 1.9.0-alpha1:
1.9.0-alpha1 includes the first release of clojure.spec.
A usage guide for spec is now available:. | https://clojure.org/community/devchangelog | CC-MAIN-2020-40 | refinedweb | 4,070 | 55.95 |
I am creating a game of hangman in python 3 and all seems to be working well apart from one part of the game. Take for example, the word 'hello'. Hello contains two 'l's. My game does not recognise that you can have more than one occurrence of a letter in a word and therefore does not update the game as it should. Here is how the program runs -
When entering 'L'
The number of times L occured was 2
('WORD:', 'HEL*O', '; you have ', 10, 'lives left')
Please input a letter:
def updateGame (x, charStr) :
occ = x['secWord'].count(charStr)
charCount = 0
indcount = 0
if charStr in x['secWord']:
while occ > charCount :
pos = x['secWord'].index(charStr, indcount)
x['curGuess'][pos] = charStr
indcount = indcount + 1
charCount = charCount + 1
else :
x['livesRem'] = x['livesRem'] - 1
return occ
I think you're also misunderstanding the second parameter of index, which is the start index to search from, not the nth occurrence to look for. What's happening right now is you're replacing the first "L" twice.
string.index(s, sub[, start[, end]]) Like find() but raise ValueError when the substring is not found.
Where find says that:
Return the lowest index in s where the substring sub is found such that sub is wholly contained in s[start:end].
So, you don't need the indcount variable, which is the nth L to find, you need the position of the last found L, plus 1. Replace "indcount" with "pos + 1" so that it searches past the last found occurrence of the letter, and it should work as intended. | https://codedump.io/share/Bw5OYD6grwDo/1/2-occurrences-of-a-letter-not-spotted-by-indexing | CC-MAIN-2017-04 | refinedweb | 269 | 67.38 |
Is that true... what about: >module Main where > >import Control.Concurrent.MVar >import System.Mem.Weak > >myFinalizer :: MVar () -> IO () >myFinalizer m = do > putMVar m () > return () > >createMyFinalizer :: IO (MVar (),Weak ()) >createMyFinalizer = do > m <- newMVar () > w <- mkWeakPtr () (Just (myFinalizer m)) > return (m,w) > >main :: IO () >main = do > (m,_) <- createMyFinalizer > _ <- takeMVar m > return () Keean Duncan Coutts wrote: >On Tue, 2004-11-23 at 18:01 +0100, Peter Simons wrote: > > >>Sim. >> >> > >For all normal threads you can wait for them by making them write to an >MVar when they finish and the main thread waits to read from the MVar >before finishing itself. > >Of course for the finalizer thread you cannot do this since you did not >start it. However the fact that finalizers are run in a dedicated thread >is itself an implementation detail that you have no control over anyway. > >Obviously from what Simon has said, you cannot solve the finalisers >problem just by running the finaliser thread to completion (or it'd be >done that way already!) > >Duncan > >_______________________________________________ >Glasgow-haskell-users mailing list >Glasgow-haskell-users at haskell.org > > > | http://www.haskell.org/pipermail/glasgow-haskell-users/2004-November/007481.html | CC-MAIN-2014-41 | refinedweb | 181 | 52.8 |
13 December 2012 23:14 [Source: ICIS news]
MEDELLIN, Colombia (ICIS)--The gross production value of Argentina’s chemical and petrochemical industries in the first ten months of 2012 stood at $19.2bn (€15.0bn), an increase of 3.5% year on year, the country’s industry ministry said on Thursday.
The sectors’ joint trade deficit in the January-October period stood at $2.8bn, down by 12% compared with the same period in 2011, the ministry said.
“We are ending 2012 with leading industries that are flourishing in an adverse international context,” said ?xml:namespace>
The government-backed 2020 Strategic Industrial Plan was launched in 2011 and aims to reduce imports by 45%, increase production in ten key sectors, including chemicals and petrochemicals, and reduce the unemployment rate to 5%
By 2020 the industries will be enjoying a trade surplus of $200m, with exports projected to reach $7.5bn and imports reduced to $7.3bn.
These goals could be attained by identifying gaps in the market, developing products for future demand, such as non-conventional oil and gas, and seeking opportunities in emerging markets, the ministry said.
The Argentine government would provide specific financial instruments to support industry investment, | http://www.icis.com/Articles/2012/12/13/9624309/argentina-chem-industries-grow-3.5-year-on-year-in-jan-oct.html | CC-MAIN-2014-41 | refinedweb | 200 | 52.6 |
DynamoDB Streams are a convenient way to react to changes in your database. And surprisingly easy to use 🥳
I tried DynamoDB Streams over the holidays to fix an annoyance that's been bugging me for months – I never know when readers click 👍 👎 on my emails.
For the past ... year ... I've had a daily habit:
- Wake up
- Make tea
- Open AWS console
- Go to DynamoDB
- Look for
spark-joy-votes-prodtable
- Look up the name of my latest email
- Type name into DynamoDB console
- Run a full table scan
- See feedback 🤩
And miss any and all feedback on my blog, on ServerlessHandbook.dev, on ReactForDataviz.com, and on evergreen emails running inside ConvertKit automations. 💩
Now all I have to do is check the
#feedback channel on Slack.
The code is open source. You can see the full pull request here. Keep reading to see how it works.
The architecture
Amazon's official DynamoDB Streams usage architecture diagram is ... intense.
Following these diagrams must be why entire businesses exist to help companies improve their AWS bills. 💸
Here's all you need:
- An app that puts data into DynamoDB
- A DynamoDB table
- A stream attached to that table
- An app that listens to the stream
Because Serverless fits side-projects perfectly I like to put the app portions on an AWS Lambda. Like this:
- A Lambda at the front handles GraphQL requests, inserts or updates data in the database
- DynamoDB stores the data
- DynamoDB Stream sends a change event to every listener (I don't know how this works underneath)
- Lambda listener wakes up and processes the event
In our case, it uses a Slack Incoming Webhook to send a message to a preconfigured channel.
Create a DynamoDB Stream with Serverless Framework
Assuming you're using
serverless.yml to create all your resources, this part is easy. An extra 2 lines in your config:
# serverless.ymlresources:Resources:JoyFeedbacksTable:Type: "AWS::DynamoDB::Table"Properties:# ...TableName: ${self:provider.environment.FEEDBACKS_TABLE}# these 2 lines create a streamStreamSpecification:StreamViewType: NEW_IMAGE
You add
StreamSpecification and define a
StreamViewType. Serverless Framework handles the rest.
Run
serverless deploy and you get a stream:
The
StreamViewType defines what goes in your stream events:
Curated Serverless & Backend Essays
Get a series of curated essays on Serverless and modern Backend. Lessons and insights from building software for production. No bullshit.
KEYS_ONLY, get the key attributes of your item
NEW_IMAGE, get the full value after change
OLD_IMAGE, get the full value before change
NEW_AND_OLD_IMAGES, get full value before and after change so you can compare
NEW_IMAGE was best for my project. I just want to forward your feedback to Slack.
Trigger an AWS Lambda on a DynamoDB Stream event
You can use
serverless.yml to configure a Lambda as your stream listener. Like this:
functions:feedbackNotification:handler: dist/feedbackNotification.handlerevents:- stream:type: dynamodbarn:Fn::GetAtt:- JoyFeedbacksTable- StreamArn# we update records when users add answers# hopefully this reduces noise# (it didn't fully)batchSize: 5MaximumBatchingWindowInSeconds: 60
This tells the Serverless Framework to:
- create a
feedbackNotificationlambda
- which runs a
handler()function exported from
dist/feedbackNotification.js
- when a
type: dynamodbstream event happens
- on the
JoyFeedbacksTabletable
- in batches of
5events
- waiting at most
60seconds to collect a batch
I used batching to reduce noise in Slack because my application has a quirk that creates a lot of update events. More on that later :)
You can see the full list of options in AWS's documentation for configuring DynamoDB stream listeners.
Run
serverless deploy and your stream gains a trigger:
Process a DynamoDB Stream with AWS Lambda
Processing the stream is a matter of writing some JavaScript. Your function is called with an array of objects and you do your thing. Because If you can JavaScript, you can backend 🤘
I use TypeScript so I don't have to worry about typos 😛
export async function handler(event: DynamoDBStreamEvent) {const votes = new Map<string, Vote>()// collect latest instance of a vote// event processing happens in-order// (const record of event.Records) {const voteRecord = parseRecord(record)if (shouldNotify(voteRecord)) {votes.set(voteRecord.vote.voteId, voteRecord.vote)}}for (const [voteId, vote] of votes) {await sendNotification(vote)}}
The
handler() function
- accepts a
DynamoDBStreamEvent,
- iterates over the list of Records,
- parses each record,
- ignores any we don't care about,
- creates a deduplicated Map of votes,
- sends a notification for each valid vote
Each event will have up to 5 records as per the
batchSize config. Because my app is weird, we may get multiple entries for the same
voteId. We throw away all except the latest.
Parse a DynamoDBStream Record with the unmarshall function
DynamoDBStream data comes in a weird shape. I don't know why.
You can use the
unmarshall function from the AWS SDK to ease your pain. My parsing function looks like this:
import { unmarshall } from "@aws-sdk/util-dynamodb"function parseRecord(record: DynamoDBRecord): VoteRecord {if (!record.dynamodb?.NewImage) {throw new Error("Invalid DynamoDBRecord")}// parse the weird object shapeconst vote = unmarshall(record.dynamodb?.NewImage as {[key: string]: AttributeValue})// my list of form answers is a JSON stringif (typeof vote.answers === "string") {vote.answers = JSON.parse(vote.answers)}return {...record,vote: vote as Vote,}}
Check that the record has data and use
unmarshall() to parse into a normal JavaScript object. Then parse the JSON and return a modified record object.
The type casting in
unmarshall() is because the official type definitions in
@aws-sdk/dynamodb don't match the open source type definitions of
@types/aws-lambda. And
DynamoDBRecord is defined in the opensource types, but not in the official types 💩
Use incoming webhooks to send Slack messages from AWS Lambda
This is the DoTheWork portion of your code. Everything up to here was boilerplate.
You'll need to create a Slack app and configure incoming webhooks. This gives you a Webhook URL that is a secret.
Anyone with this URL can send messages to your Slack. Make sure it's safe :)
I stored mine in AWS Secrets Manager (manually). The code uses
@aws-sdk/client-secrets-manager to fetch the URL from secrets any time it's needed. Like this:
import {GetSecretValueCommand,SecretsManagerClient,} from "@aws-sdk/client-secrets-manager"// reads slack webhook url from secrets managerasync function getSlackUrl() {const client = new SecretsManagerClient({region: "us-east-1",})const command = new GetSecretValueCommand({SecretId: "sparkjoySlackWebhook",})const secret = await client.send(command)if (!secret.SecretString) {throw new Error("Failed to read Slack Webhook URL")}return JSON.parse(secret.SecretString) as { webhookUrl: string }}
Instantiate a
SecretsManagerClient, create a command, send the command to get the secret. This API feels weird to me, but an improvement on AWS SDK v2.
Sending the notification looks like this:
import { IncomingWebhook } from "@slack/webhook"async function sendNotification(vote: Vote): Promise<void> {console.log("Gonna send notification for", vote)const { webhookUrl } = await getSlackUrl()const webhook = new IncomingWebhook(webhookUrl)if (vote.voteType === "thumbsup") {await webhook.send({text: `Yay _${vote.instanceOfJoy}_ got a 👍 with answers \`${JSON.stringify(vote.answers)}\` from ${vote.voter}`,})} else {await webhook.send({text: `Womp _${vote.instanceOfJoy}_ got a 👎 with answers \`${JSON.stringify(vote.answers)}\` from ${vote.voter}`,})}}
The console.log helps me debug any issues, then we
- get the webhook url
- instantiate a Slack client
- construct a message
- wait for
send()
And our Slack is full of feedback 🥳
Real-time really means real-time
DynamoDB Streams are real-time. You get a new event as soon as records change. And that's why my Slack notifications are noisy.
After you vote, there are follow-up questions. Each answer saves to the database, updates your vote, and triggers an event.
But I can't know when you're done! Will you vote and bail or answer 3 questions? Don't know can't know.
Keep that in mind when you build event-based systems ✌️
Cheers,
~Swizec
PS: if you're curious about serverless, consider grabbing a copy of Serverless Handbook, it's a great resource :)
Want to become a Serverless and modern Backend.
I've been building web backends since ~2004 when they were just called websites. With these curated essays I want to share the hard lessons learned. Leave your email and get the Serverless and Modern Backend email series.
Curated Serverless & Backend Essays
Get a series of curated essays on Serverless and modern Backend.️ | https://swizec.com/blog/using-dynamodb-streams-with-the-serverless-framework/ | CC-MAIN-2022-27 | refinedweb | 1,372 | 56.15 |
When we are processing pdf files with python, we should check a pdf is completed or corrupted. In this tutorial, we will introduce you a simple way to how to detect. You can use this tutorial example in your application.
Some features of completed pdf files
PPF file 1.
The pdf file ends with NUL. Meanwhile, there are many NUL in last line.
The last second line contains: %%EOF
At the middle of this pdf file, there are also a %%EOF.
PDF file 2.
This pdf file ends with NUL, there are only a NUL in the last line.
The last second line also contains a %%EOF.
PDF file 3.
The pdf file ends with unknown symbol. However, the last second line contains a %%EOF.
PDF file 4.
This pdf file ends with %%EOF.
Then check the start of pdf
PDF file 5.
This pdf start with: %PDF
So as to a completed pdf, the feature of it is:
1.The pdf file ends with %%EOF or NUL.
2.This file contain more than one %%EOF symbol.
3. The content of pdf file contains %PDF.
We can create a python function to detect a pdf file is completed or not.
def isFullPdf(f): end_content = '' start_content = '' size = os.path.getsize(f) if size < 1024: return False with open(f, 'rb') as fin: #start content fin.seek(0, 0) start_content = fin.read(1024) start_content = start_content.decode("ascii", 'ignore' ) fin.seek(-1024, 2) end_content = fin.read() end_content = end_content.decode("ascii", 'ignore' ) start_flag = False #%PDF if start_content.count('%PDF') > 0: start_flag = True if end_content.count('%%EOF') and start_flag > 0: return True eof = bytes([0]) eof = eof.decode("ascii") if end_content.endswith(eof) and start_flag: return True return False
I have test this function on more than 1,000 pdf files, it works well. | https://www.tutorialexample.com/a-simple-guide-to-python-detect-pdf-file-is-corrupted-or-incompleted-python-tutorial/ | CC-MAIN-2021-31 | refinedweb | 299 | 80.07 |
This is a story about a horrific blunder. Thankfully, no Bothans died bringing this information to you.
As I mentioned previously, I wrote a JSON parser to demonstrate how to write a real, live working Haskell program. I started by working off of the pseudo-BNF found on the JSON homepage. From the perspective of the JSON grammar, the constructs it deals with are objects (otherwise known as Maps, hashes, dicts or associative arrays), arrays, strings, numbers, and the three magic values true, false and null.
My first task was to create a data type that captures the values that can be expressed in this language:
With the datatype in place, I then started writing parsing functions to build objects, arrays, and so on. Pretty soon, I had a JSON parser that passed the validation tests.With the datatype in place, I then started writing parsing functions to build objects, arrays, and so on. Pretty soon, I had a JSON parser that passed the validation tests.data Value = String String
| Number Double
| Object (Map String Value)
| Array [Value]
| True
| False
| Null
deriving (Eq)
I used this piece of working Haskell code during my presentation, highlighting how all the parts worked together -- the parsers that returned specific kinds of Value types, those that returned String values, and so on.
Pretty soon I got tongue tied, talking about how Value was a type, and why String was a type in some contexts, and a data constructor for Value types in other contexts. And how Number wasn't a number, but a Value.
I'm surprised anyone managed to follow that code.
The problem, as I see it, is that I was so totally focused on the JSON domain that I didn't think about the Haskell domain. My type was called Value, because that's what it's called in the JSON grammar. It never occurred to me as I was writing the code that a type called Value is pretty silly. And, because types and functions are in separate namespaces, I never noticed that the data constructor for strings was called String.
Thankfully, the code was in my editor, so I changed things on the fly during the presentation to make these declarations more (ahem) sane:
I think that helped to clarify that String is a pre-defined type, and JsonString is a value constructor that returns something of type JsonValue.I think that helped to clarify that String is a pre-defined type, and JsonString is a value constructor that returns something of type JsonValue.data JsonValue = JsonString String
| JsonNumber Double
| JsonObject (Map String JsonValue)
| JsonArray [JsonValue]
| JsonTrue
| JsonFalse
| JsonNull
deriving (Show, Eq)
When I gave this presentation again a couple of weeks ago, the discussion around this JSON parser was much less confusing.
Lesson learned: let the compiler and another person read your code to check that it makes sense. ;-) | http://notes-on-haskell.blogspot.sg/2007/05/namespaces-confusion.html | CC-MAIN-2017-51 | refinedweb | 481 | 57.3 |
Wrong Time Zone
I've got the following code
QDateTime timestamp = QDateTime::currentDateTime(); timestamp.setTimeSpec(Qt::LocalTime); text = timestamp.toString("hh:mm:ss");
But the result shows UTC instead of my timezone. I'm using Win7 and I have the correct timezone listed. Any idea what the problem is?
- Chris Kawa Moderators
currentDateTimereturns time already in local time spec/zone, so the next line does nothing. Verify if the correct time zone/spec is used e.g. by printing:
qDebug() << timestamp.timeZone().displayName(timestamp) << timestamp.timeSpec();
Thanks for your reply.
timeZone() in timestamp.timeZone() doesn't come up and gives an error.
Hi and welcome to devnet,
build error or runtime error?
Could you post the error?
- Chris Kawa Moderators
Did you
#include <QTimeZone>?
For the future - if you get an error you should post what it says. We don't have a crystal ball you know ;)
This post is deleted! | https://forum.qt.io/topic/53659/wrong-time-zone | CC-MAIN-2017-51 | refinedweb | 151 | 63.15 |
- Table of Contents
- Table of Contents
- BackCover
- Microsoft Exchange Server 2003
- Foreword
- Preface
- Product names
- Omissions
- URLs
- Acknowledgments
- Chapter 1: A Brief History of Exchange
- 1.1 Exchange first generation
- 1.2 Exchange second generation
- 1.3 Exchange third generation
- 1.4 Deploying Exchange 2003
- 1.5 Some things that Microsoft still has to do
- 1.6 Moving on
- Chapter 2: Exchange and the Active Directory
- 2.1 The Active Directory
- 2.2 Preparing the Active Directory for Exchange
- 2.3 Active Directory replication
- 2.4 The Active Directory Connector
- 2.5 The LegacyExchangeDN attribute
- 2.6 DSAccess-Exchange s directory access component
- 2.7 Interaction between Global Catalogs and clients
- 2.8 Exchange and the Active Directory schema
- 2.9 Running Exchange in multiple forests
- 2.10 Active Directory tools
- Chapter 3: Exchange Basics
- 3.2 Access control
- 3.3 Administrative and routing groups
- 3.4 Mailboxes and user accounts
- 3.5 Distribution groups
- 3.6 Query-based distribution groups
- 3.7 Summarizing Exchange basics
- Chapter 4: Outlook-The Client
- 4.1 MAPI-Messaging Application Protocol
- 4.2 Making Outlook a better network client for Exchange
- 4.3 How many clients can I support at the end of a pipe?
- 4.4 Blocking client access
- 4.5 Junk mail processing
- 4.6 The Offline Address Book (OAB)
- 4.7 Freebusy information
- 4.8 Personal folders and offline folder files
- 4.9 Offline folder files
- 4.10 SCANPST-first aid for PSTs and OSTs
- 4.11 Working offline or online
- 4.12 Outlook command-line switches
- Chapter 5: Outlook Web Access
- 5.1 Second-generation OWA
- 5.2 The OWA architecture
- 5.3 Functionality: rich versus reach or premium and basic
- 5.4 Suppressing Web beacons and attachment handling
- 5.5 OWA administration
- 5.6 Exchange s URL namespace
- 5.7 Customizing OWA
- 5.8 OWA firewall access
- 5.9 OWA for all
- Chapter 6: Internet and Other Clients
- 6.1 IMAP4 clients
- 6.2 POP3 clients
- 6.3 LDAP directory access for IMAP4 and POP3 clients
- 6.4 Supporting Apple Macintosh
- 6.5 Supporting UNIX and Linux clients
- 6.6 Exchange Mobile Services
- 6.7 Pocket PC clients
- 6.8 Palm Pilots
- 6.9 Mobile BlackBerries
- 6.10 Sending messages without clients
- 6.11 Client licenses
- Chapter 7: The Store
- 7.1 Structure of the Store
- 7.2 Exchange ACID
- 7.3 EDB database structure
- 7.4 The streaming file
- 7.5 Transaction logs
- 7.6 Store partitioning
- 7.7 Managing storage groups
- 7.8 ESE database errors
- 7.9 Database utilities | http://flylib.com/books/en/4.389.1.1/1/ | CC-MAIN-2017-09 | refinedweb | 423 | 57.43 |
Get the highlights in your inbox every week.
Managing Python packages the right way
Managing Python packages the right way
Don't fall victim to the perils of Python package management.
Subscribe now
The Python Package Index (PyPI) indexes an amazing array of libraries and applications covering every use case imaginable. However, when it comes to installing and using these packages, newcomers often find themselves running into issues with missing permissions, incompatible library dependencies, and installations that break in surprising ways.The Zen of Python states: "There should be one—and preferably only one—obvious way to do it." This is certainly not always the case when it comes to installing Python packages. However, there are some tools and methods that can be considered best practices. Knowing these can help you pick the right tool for the right situation.
Installing applications system-wide
pip. Once all dependencies have been satisfied, it proceeds to install the requested package(s). This all happens globally, by default, installing everything onto the machine in a single, operating system-dependent location.
Python 3.7 looks for packages on an Arch Linux system in the following locations:
$ python3.7 -c "import sys; print('\n'.join(sys.path))"
/usr/lib/python37.zip
/usr/lib/python3.7
/usr/lib/python3.7/lib-dynload
/usr/lib/python3.7/site-packages
One problem with global installations is that only a single version of a package can be installed at one time for a given Python interpreter. This can cause issues when a package is a dependency of multiple libraries or applications, but they require different versions of this dependency. Even if things seem to be working fine, it is possible that upgrading the dependency (even accidentally while installing another package) will break these applications or libraries in the future.
Another potential issue is that most Unix-like distributions manage Python packages with the built-in package manager (dnf, apt, pacman, brew, and so on), and some of these tools install into a non-user-writeable location.
$ python3.7 -m pip install pytest
Collecting pytest
Downloading...
[...]
Installing collected packages: atomicwrites, pluggy, py, more-itertools, pytest
Could not install packages due to an EnvironmentError: [Error 13] Permission denied:
'/usr/lib/python3.7/site-packages/site-packages/atomicwrites-x.y.z.dist-info'
Consider using '--user' option or check the permissions.
$
This fails because we are running pip install as a non-root user and we don't have write permission to the site-packages directory.
You can technically get around this by running pip as a root (using the sudo command) or administrative user. However, one problem is that we just installed a bunch of Python packages into a location the Linux distribution's package manager owns, making its internal database and the installation inconsistent. This will likely cause issues anytime we try to install, upgrade, or remove any of these dependencies using the package manager.
As an example, let's try to install pytest again, but now using my system's package manager, pacman:
$ sudo pacman -S community/python-pytest
resolving dependencies...
looking for conflicting packages...
[...]
python-py: /usr/lib/site-packages/py/_pycache_/_metainfo.cpython-37.pyc exists in filesystem
python-py: /usr/lib/site-packages/py/_pycache_/_builtin.cpython-37.pyc exists in filesystem
python-py: /usr/lib/site-packages/py/_pycache_/_error.cpython-37.pyc exists in filesystem
Another potential issue is that an operating system can use Python for system tools, and we can easily break these by modifying Python packages outside the system package manager. This can result in an inoperable system, where restoring from a backup or a complete reinstallation is the only way to fix it.
sudo pip install: A bad idea
There is another reason why running pip install as root is a bad idea. To explain this, we first have to look at how Python libraries and applications are packaged.
Most Python libraries and applications today use setuptools as their build system. setuptools requires a setup.py file in the root of the project, which describes package metadata and can contain arbitrary Python code to customize the build process. When a package is installed from the source distribution, this file is executed to perform the installation and execute tasks like inspecting the system, building the package, etc.
Executing setup.py with root permissions means we can effectively open up the system to malicious code or bugs. This is a lot more likely than you might think. For example, in 2017, several packages were uploaded to PyPI with names resembling popular Python libraries. The uploaded code collected system and user information and uploaded it to a remote server. These packages were pulled shortly thereafter. However, these kinds of "typo-squatting" incidents can happen anytime since anyone can upload packages to PyPI and there is no review process to make sure the code doesn't do any harm.
The Python Software Foundation (PSF) recently announced that it will sponsor work to improve the security of PyPI. This should make it more difficult to carry out attacks such as "pytosquatting" and hopefully make this less of an issue in the future.
Security issues aside, sudo pip install won't solve all the dependency problems: you can still install only a single version of any given library, which means it's still easy to break applications this way.
Let's look at some better alternatives.
OS package managers
It is very likely that the "native" package manager we use on our OS of choice can also install Python packages. The question is: should we use pip, or apt, dnf, pacman, and so on?
The answer is: it depends.
pip is generally used to install packages directly from PyPI, and Python package authors usually upload their packages there. However, most package maintainers will not use PyPI, but instead take the source code from the source distribution (sdist) created by the author or a version control system (e.g., GitHub), apply patches if needed, and test and release the package for their respective platforms. Compared to the PyPI distribution model, this has pros and cons:
- Software maintained by native package managers is generally more stable and usually works better on the given platform (although this might not always be the case).
- This also means it takes extra work to package and test upstream Python code:
- The package selection is usually much smaller than what PyPI offers.
- Updates are slower and package managers will often ship much older versions.
If the package we want to use is available and we don't mind slightly older versions, the package manager offers a convenient and safe way to install Python packages. And, since these packages install system-wide, they are available to all users on the system. This also means that we can use them only if we have the required permissions to install packages on the system.
If we want to use something that is not available in the package manager's selection or is too old, or we simply don't have the necessary permissions to install packages, we can use pip instead.
User scheme installations
pip supports the "user scheme" mode introduced in Python 2.6. This allows for packages to be installed into a user-owned location. On Linux, this is typically ~/.local. Putting ~/.local/bin/ on our PATH will make it possible to have Python tools and scripts available at our fingertips and manage them without root privileges.
$ python3.7 -m pip install --user black
Collecting black
Using cached
[...]
Installing collected packages: click, toml, black
The scripts black and blackd are installed in '/home/tux/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed black-x.y click-x.y toml-x.y.z
$
However, this solution does not solve the issue if and when we need different versions of the same package.
Enter virtual environments
Virtual environments offer isolated Python package installations that can coexist independently on the same system. This offers the same benefits as user scheme installations, but it also allows the creation of self-contained Python installations where an application does not share dependencies with any other application. Virtualenv creates a directory that holds a self-contained Python installation, including the Python binary and essential tools for package management: setuptools, pip, and wheel.
Creating virtual environments
virtualenv is a third-party package, but Python 3.3 added the venv package to the standard library. As a result, we don't have to install anything to use virtual environments in modern versions of Python. We can simply use python3.7 -m venv <env_name> to create a new virtual environment.
After creating a new virtual environment, we must activate it by sourcing the activate script in the bin directory of the newly created environment. The activation script creates a new subshell and adds the bin directory to the PATH environment variable, enabling us to run binaries and scripts from this location. This means that this subshell will use python, pip, or any other tool installed in this location instead of the ones installed globally on the system.
$ python3.7 -m venv test-env
$ . ./test-env/bin/activate
(test-env) $
After this, any command we execute will use the Python installation inside the virtual environment. Let's install some packages.
(test-env)$ python3.7 -m pip install --user black
Collecting black
Using cached
[...]
Installing collected packages: click, toml, black
Successfully installed black-x.y click-x.y toml-x.y.z
(test-env) $
We can use black inside the virtual environment without any manual changes to the environment variables like PATH or PYTHONPATH.
(test-env) $ black --version
black, version x.y
(test-env) $ which black
/home/tux/test-env/bin/black
(test-env) $
When we are done with the virtual environment, we can simply deactivate it with the deactivate function.
(test-env) $ deactivate
$
Virtual environments can also be used without the activation script. Scripts installed in a venv will have their shebang line rewritten to use the Python interpreter inside the virtual environment. This way, we can execute the script from anywhere on the system using the full path to the script.
(test-env) $ head /home/tux/test-env/bin/black
#!/home/tux/test-env/bin/python3.7
# -*- coding: utf-8 -*-
import re
import sys
from black import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
(test-env) $
We can simply run ~/test-env/bin/black from anywhere on the system and it will work just fine.
It can be useful to add certain commonly used virtual environments to the PATH environment variable so we can quickly and easily use the scripts in them without typing out the full path:
export PATH=$PATH:~/test-env/bin
Now when we execute black, it will be picked up from the virtual environment (unless it appears somewhere else earlier on the PATH). Add this line to your shell's initialization file (e.g., ~/.bashrc) to have it automatically set in all new shells.
Virtual environments are very commonly used for Python development because each project gets its own environment where all library dependencies can be installed without interfering with the system installation.
I recommend checking out the virtualenvwrapper project, which can help simplify common virtualenv-based workflows.
What about Conda?
Conda is a package management tool that can install packages provided by Anaconda on the repo.continuum.io repository. It has become very popular, especially for data science. It offers an easy way to create and manage environments and install packages in them. One drawback compared to pip is that the package selection is much smaller.
A recipe for successful package management
- Never run sudo pip install.
- If you want to make a package available to all users of the machine, you have the right permissions, and the package is available, then use your distribution's package manager (apt, yum, pacman, brew, etc.).
- If you don't have root permissions or the OS package manager doesn't have the package you need, use pip install --user and add the user installation directory to the PATH environment variable.
- If you want multiple versions of the same library to coexist, to do Python development, or just to isolate dependencies for any other reason, use virtual environments.
This article was originally published in April 2019 and has been updated by the editor.
11 Comments, Register or Log in to post a comment.
Could you explain why you did not even mention pipenv?
I wanted to focus on the basics of package management in this article and pipenv is more geared towards managing development environments.
Thanks for this article as it was long overdue. I struggled for such a long time with system libraries vs my libraries in Ubuntu. It was an absolute nightmare which no one seemed to explain.
However I would say that pipenv is what made this much easier for me.
Here is a good article on how to use pipenv:
Thank you so much for not endorsing the dumpster fire that is pipenv
Pretty awesome that python3.7 supports this now. I personally still would prefer miniconda simply because I can install some packages without requiring make, gcc,...
Also, conda does also come with pip so you can install packages not in the conda repository thus giving the best of both worlds.
Hi,
We used pyenv[1] during the development of SOURCEdefender[2] as this enabled us to support all versions of Python on Linux, Windows, and macOS platforms. We even have it running on a Raspberry Pi too!
Also, using something like Dircmd[3] to automatically activate and deactivate a virtual environment as you enter or exit a venv folder is great too!
Thanks,
SOURCEdefender
---
1:
2:
3:
pyenv on windows? please elaborate. Python on windows very soon turns into a mess because when python is upgraded the venvs have different base python. I recreate all venvs from requirements.txt after upgrade.
To simplify common venv-based workflows you can use venvtools.
Venvtools made managing development virtual environment easier via series of command.
venvtools command similarly like git command. You just press [tab] twice for commands action (create, list, remove, activate, deactivate, goto) and environments auto completion.
Disclaimer: I’m the author of venvtools. Please let me know for any bug or improvements suggestion.
Interesting indeed! This article is helpful to understand diverse package managers, specifically for the python ecosystem.
Thanks for sharing
Select a particular version of a package/library to install inside an environment like this:
>> pip install =
i.e., pip install fuzzysearch=0.6.0
Why when I use pip to install a package, it will force the installed package (and associated packages) to call system libraries, while using conda, to install the same package, it can also check, and patch up if needed, the local installation? | https://opensource.com/article/19/4/managing-python-packages | CC-MAIN-2021-43 | refinedweb | 2,490 | 54.52 |
This is the 3rd in a 5-part series covering the basics of the SDK for Connector.
The ObjectProviders
Another critical piece of Connector is the ObjectProviders. The ObjectProviders allow Connector to update, delete, and create objects of a specific type in the systems being connected. For example, the SampleAdapter comes with a SampleCustomerObjectProvider that defines how to keep Customer objects in sync between two systems. This post is going to cover some topics relevant to the ObjectProviders, including an overview of the SampleCustomerObjectProvider, SampleUofMScheduleObjectProvider, the types of ObjectProviders, the query used in some methods of an ObjectProvider, and “using” statements.
This ObjectProvider illustrates how a mixed object provider (one that implements both IObjectReader and IObjectWriter interfaces) is setup. In this case, a Customer object could be both read and written from a system. This is also why there are lots of methods to be implemented in this kind of ObjectProvider.
This ObjectProvider illustrates a very simple example of an ObjectProvider that implements only the IObjectReader interface. Thus, it can only be read from a system. It is fairly simple and straightforward. This is a good starting point for ObjectProviders.
ObjectProviders can take three different forms: destination, source, and mixed. Destination ObjectProviders implement the IObjectWriter interface and can be used to write objects to a system. Source ObjectProviders implement the IObjectReader interface and can be used to read objects from a system. Finally, mixed ObjectProviders implement both the IObjectWriter and IObjectReader interfaces so that you can read and write this object from and to a system. Think of source and destination as one-way ObjectProviders and mixed as two-way ObjectProviders.
Some methods in the ObjectProviders that are provided by the IObjectReader interface take a parameter called “query” that is of type System.DateTime. This represents the last time that the integration service was run. So, the idea is to pickup any changes that have happened since the last time the integration service was run. You’ll notice in the sample code that inside methods like ReadDeletedObjectKeys(), we use the query DateTime object as the value for the LastModifiedDate attribute of the criteria. So, when we request all objects that have been changes since the last run of the integration service, we’ll receive them.
When you utilize the ObjectProvider templates, you may need to implement some of the following namespaces with a “using” statement:
You may not need to implement all of these namespaces for your specific implementation, but they may be required particularly if you are interacting with web services. | http://blogs.msdn.com/b/dynamicsconnector/archive/2011/08/25/objecdtproviders-in-connector-sdk-3rd-in-the-sdk-series.aspx | CC-MAIN-2015-27 | refinedweb | 422 | 53.1 |
Red component of the color.
The value ranges from 0 to 1.
no example available in JavaScript
//Attach this script to a GameObject with a Renderer attached to it //Use the sliders in Play Mode to change the Color of the GameObject's Material
using UnityEngine;
public class Example : MonoBehaviour { Renderer m_Renderer; //Set the Color to white to start off public Color color = Color.white;
void Start() { //Fetch the Renderer of the GameObject m_Renderer = GetComponent<Renderer>(); }
private void OnGUI() { //Sliders for the red, green and blue components of the Color color.r = GUI.HorizontalSlider(new Rect(0, 00, 100, 30), color.r, 0, 1); color.g = GUI.HorizontalSlider(new Rect(0, 40, 100, 30), color.g, 0, 1); color.b = GUI.HorizontalSlider(new Rect(0, 80, 100, 30), color.b, 0, 1);
//Set the Color of the GameObject's Material to the new Color m_Renderer.material.color = color; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2017.3/Documentation/ScriptReference/Color-r.html | CC-MAIN-2022-05 | refinedweb | 159 | 51.34 |
Introduction :
This tutorial will show you how to reverse a number in Kotlin. We will learn two different ways to do that. The first method will use one loop and the second method will use a different approach to solve it. This program will take the number as an input from the user, calculate the reverse and print it to the user.
Method 1: Using a loop :
Create one new variable to hold the reverse number. Initialize it as 0. Using one loop, pick the rightmost digit of the number and add it to the reverse number. For example, if the given number is 123 :
- Initialize the reverse number variable as 0
- Pick 3 of 123 and convert it to 12. Add it to the reverse number. It will become 3.
- Pick 2, the reverse number will be 32.
- Pick 1, the reverse number will be 321.
Kotlin program :
import java.util.Scanner fun main(args: Array<string>) { // 1 var num: Int var reverse: Int = 0 val scanner = Scanner(System.`in`) // 2 print("Enter a number : ") num = scanner.nextInt() // 3 while (num != 0) { reverse = reverse * 10 + num % 10; num /= 10; } // 4 println("Reverse number is : $reverse") }
Explanation :
The commented numbers in the above program denote the step numbers below :
- Create two integer variables. num is for holding the user input number and reverse is for holding the reverse number. the scanner is a Scanner variable to read the user input number. Don’t forget to import java.util.Scanner if you are using a Scanner.
- Ask the user to enter a number. Read it using a scanner and store it in num.
- Run one while loop. This loop will run until the value of num is not equal to zero. Inside the loop, we are updating the value of reverse. On each step, it takes the rightmost digit of num and adds it to reverse. We are also removing the rightmost digit and updating the value of num.
- The loop will stop once the value of num is 0 i.e. all digits are added to reverse. Print the value of reverse to the user.
Sample Output :
Enter a number : 8374847 Reverse number is : 7484738 Enter a number : 2233224 Reverse number is : 4223322 Enter a number : 1222345 Reverse number is : 54322211
Method 2: Converting it to a string :
This is another way to reverse a number. Convert the number to a string and then reverse it. Kotlin provides toString() method to convert an integer to string and reversed() method to reverse a string. Note that the final reverse value will be string, not an integer.
import java.util.Scanner fun main(args: Array<string>) { var num: Int var reverse: String val scanner = Scanner(System.`in`) print("Enter a number : ") num = scanner.nextInt() reverse = num.toString().reversed() println("Reverse number is : $reverse") }
It will print outputs like below :
Enter a number : 12345 Reverse number is : 54321 Enter a number : 1009890 Reverse number is : 0989001
Note that with the first method, if you input any number with trailing zeros like 100, it will not print the zeroes at start for the reverse number, as it is an integer. For 100, it will print 1. But with the second method, it will print these zeroes as we are using string, not an integer. | https://www.codevscolor.com/kotlin-reverse-number | CC-MAIN-2021-10 | refinedweb | 550 | 75.4 |
OK, I now have a fix for this craziness:
Advertising
For those just starting out, it would be best to grab TDM's tdm-gcc-4.6.1 compiler here: You'll want a development MinGW distro that's compatible with the Haskell platform's included GCC 4.5.2 version. Hack your WxWidgets include\wx\dlimpexp.h file, line 37, resulting in: # elif defined(__GNUC__) /* __declspec could be used here too but let's use the native __attribute__ instead for clarity. */ # define WXEXPORT __attribute__((dllexport)) # define WXIMPORT __attribute__((dllimport)) # endif and build WxWidgets with: CPPFLAGS="-fno-keep-inline-dllexport" SHARED=1 BUILD=release You'll get quick starting, lower memory use Haskell binaries, after your WxHaskell install from cabal. Unregister any previous versions where needed: ghc-pkg unregister wx ghc-pkg unregister wxcore ghc-pkg unregister wxc ghc-pkg unregister wxdirect Install based on fresh WxWidgets build: cabal install wxdirect wxc wxcore wx Compile HelloWorld, run, then break for pizza and beer! On Sat, Jul 28, 2012 at 8:28 PM, Simon Peter Nicholls <si...@mintsource.org> wrote: > I'm seeing the slow startup issue for the c++ sample apps anyway, so I > think the issue is in the base wxwidgets 2.9(.4) gcc build. No > problems with Linux 2.9 or Windows wxPack 2.8.12. ------------------------------------------------------------------------------@lists.sourceforge.net | https://www.mail-archive.com/wxhaskell-users@lists.sourceforge.net/msg01178.html | CC-MAIN-2018-30 | refinedweb | 222 | 54.63 |
Once..
Next, we will add a few functions to one of your dragonfly modules to connect a WebDriver instance to your existing Chrome session:
from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By def create_driver(): global driver chrome_options = Options() chrome_options.experimental_options["debuggerAddress"] = "127.0.0.1:9222" driver = webdriver.Chrome(CHROME_DRIVER_PATH, chrome_options=chrome_options) def quit_driver(): global driver if driver: driver.quit() driver = None def test_driver(): driver.get('');
You will need to replace CHROME_DRIVER_PATH with the path of the ChromeDriver executable you downloaded.. Add the following function to your module:
def switch_to_active_tab(): tabs = json.load(urllib2.urlopen("")) # Chrome seems to order the tabs by when they were last updated, so we find # the first one that is not an extension. for tab in tabs: if not tab["url"].startswith("chrome-extension://"): active_tab = tab["id"] break for window in driver.window_handles: # ChromeDriver adds to the raw ID, so we just look for substring match. if active_tab in window: driver.switch_to_window(window); print "Switched to: " + driver.title.encode('ascii', 'backslashreplace') return
Now, try calling this function first thing in test_driver, and it should operate on the active tab. This technique doesn’t work perfectly when multiple windows are open, but it works most of the time. If you have a more robust solution, please let me know in the comments!
Navigating to Google isn’t terribly exciting, so let’s add something more useful. Add the following action class and voice binding:
class ClickElementAction(DynStrActionBase): def __init__(self, by, spec): DynStrActionBase.__init__(self, spec) self.by = by def _parse_spec(self, spec): return spec def _execute_events(self, events): switch_to_active_tab() element = driver.find_element(self.by, events) element.click() ... in your Chrome bindings ... "search bar": ClickElementAction(By.NAME, "q"),
This handy shortcut will let you focus the Google search bar from the Google search results page, making it easy to edit your query.
WebDriver provides several ways of finding an element on a webpage, so you can reuse this action to create a shortcut for nearly any button or link on any webpage. For example, here is a binding that lets you expand all the messages in a Gmail conversation:
"expand all": ClickElementAction(By.XPATH, "//*[@aria-label='Expand all']"),
Check out the WebDriver docs for more ways of locating an element..
7 thoughts on “Custom web commands with WebDriver”
Very interesting. I never thought of using a web driver to control a browser.
That said, the only common thing I really never found a simple keyboard shortcut or mouse click for is starting and stopping embedded YouTube videos.
Wait, I take that back. There was a code review system that I could never figure out how to select the line to comment on — it wanted a right-click on a non-hyperlink if I remember correctly.
Good idea, I just added that for the code review system I use for work 🙂
Hello James,
Thank you for your blog, it’s very interesting.
However, I’m very interested in connecting my selenium web driver directly to an existing chrome session (launched with–remote-debugging-port=9222 option). I’ve tried your code, but nothing happened for me…
Do you have a special version of selenium or google chrome ?
Any help would be appreciated.
Thx
Franck
Hi Franck!
I’m using standard Google Chrome and Python selenium. Try quitting out of chrome and then explicitly killing every chrome process that remains (typically the second step isn’t necessary though). Then start with the flag and try again. If it’s working you should be able to load from within Chrome. If that works but the Python code doesn’t, then it sounds like there is something wrong with the Python code. Here’s my up-to-date python code for this:
Hi James !!
Thank you for your fast answer.
Below my context:
* Google Chrome 48.0.2564.82
* Python 2.7.6
* ChromeDriver 2.20.353124
*#26~14.04.1-Ubuntu
Now, with your Git source code, it’s open a new Chrome window (I’m using Ubuntu) but nothing happened. Your program, on my computer, blocks on
driver = webdriver.Chrome(“/usr/bin/google-chrome”, chrome_options=chrome_options).
And yes, I can load on Chrome.
I think I’m cursed.
Thank you for your help.
Have a nice day.
Franck
Sorry for the slow response, apparently my email notifications were broken 🙁
What do you mean when you say you are running Ubuntu? Are you running Dragon on a virtual machine? If so you might have to configure port forwarding or similar in order to access the Chrome server.
Also, if you have a firewall, try turning that off first. | http://handsfreecoding.org/2015/02/21/custom-web-commands-with-webdriver/ | CC-MAIN-2018-34 | refinedweb | 779 | 59.4 |
How to: Pause a Windows Service (Visual Basic)
This example uses the ServiceController component to pause the IIS Admin service on the local computer.
This code example is also available as an IntelliSense code snippet. In the code snippet picker, it is located in Windows Operating System > Windows Services. For more information, see How to: Insert Snippets Into Your Code (Visual Basic).
This example requires:
A project reference to System.serviceprocess.dll.
Access to the members of the System.ServiceProcess namespace. Add an Imports statement if you are not fully qualifying member names in your code. For more information, see Imports Statement (.NET Namespace and Type).). | https://msdn.microsoft.com/en-us/library/a4s1c36s(v=vs.90).aspx | CC-MAIN-2018-13 | refinedweb | 106 | 51.44 |
This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.
Hi all, I'm new to Windows programming - so be gentle :) I'm trying to get a COM client up and running - but everytime I run the program I get a Seg fault. I've managed to find the problem and it seems to be associated with "rpcndr.h". Is there any special library that I need to include to make this work (I've pretty much included all the libraries in w32api directory - no success). Here's a sample program that will cause a failure: #include <iostream> #include <rpc.h> #include <rpcndr.h> using namespace std; int main (int argc, char **argv) { cout << "Made it!" << endl; return EXIT_SUCCESS; } It compiles and links successfully using: > g++ -Wall -g -o test test.cpp When I run the program, I get the following: 0 [main] test 1208 open_stackdumpfile: Dumping stack trace to test.exe.stackdump Segmentation fault (core dumped) (it fails before the first line of the main()) test.exe.stackdump is: Exception: STATUS_ACCESS_VIOLATION at eip=00000000 eax=0A0103B0 ebx=0A0103B0 ecx=61095A9C edx=00000000 esi=00433D74 edi=004332E0 ebp=0022FD44 esp=0022FCE8 program=f:\hack\thunder\tmp\test.exe cs=001B ds=0023 es=0023 fs=003B gs=0000 ss=0023 Stack trace: Frame Function Args End of stack trace Commenting out rpcndr.h - all works fine. But since this file is been included elsewhere (in my main program - not this test sample shown here) - I don't have that luxury. Any clues on how this problem can be circumvented? BTW: I'm using gcc v3.0 thanks heaps Dave ps: there was no core dump to speak of even though the Seg fault eluded to one being dumped? -- Unsubscribe info: Bug reporting: Documentation: FAQ: | http://www.cygwin.com/ml/cygwin/2001-09/msg00045.html | crawl-002 | refinedweb | 300 | 74.49 |
Flutter apps – starting work with the new Google cross-platform framework
Up until very recently Flutter was just another novelty, an upcoming new player in cross-platform application development. As a seasoned iOS dev, I didn’t bother to give it much thought. Few months into the release – everyone seems to be talking about it. “Everything is a widget”’ “Beautiful native apps in record time” and a few other buzz phrases seemed to give Google’s creation a pretty good marketing head start. Naturally, our company, always on top of the latest trends, jumped right at it and started working on setting up a Flutter team. Here’s a rundown of my first serious attempt at using Flutter’s DART and widget based framework.
Table of contents
- First step – Installation
- The tutorials
- First Flutter app – concept
- Getting started
- Working with code – analyzing examples
- One small detail
- Fabric/Crashlytics integration
- Running Continuous Integration
- Flutter-friendly design pattern – why BLoC?
- The implementation
First step – Installation
Setting up Flutter on MacOS is quite trivial and the official Flutter website explains the macOS install process step-by-step Being a Swift developer, I had Xcode and Xcode command line tools already on-board, which made the process even quicker.
I did have to install Android Studio (Visual Studio Code can be used as well) to be able to code with Dart – Flutter’s language. I also had to add ADB to have my connected android device(s) visible. For the IDE I decided to try out Android Studio as I used to write some “hello world” apps for Android back in the day, so regardless of changes to-date, the environment was familiar at least.
All in all, the setup took me about two hours, and I was able to create a simple demo Flutter project and run it on physical Android and IOS devices connected to my Mac via USB.
The tutorials
Flutter official website is rich with code samples for beginners, which explain how proper working with Dart should look like. My first reaction was a bit of a panic, as what I saw was very different to what I’m used to. I didn’t “feel” the code, there were no classes, methods or variables in plain sight – sure, they were there, but the syntax differed significantly from what I’m used to in Swift. If you enjoy looking into new programming languages, I suppose you know that I’m talking about – everything seemed a bit odd – a different editor, different color -coding, and obviously – a whole different language to figure out. I’m pretty sure all I need is to take Dart for a longer spin to get a more familiar feel for all the visual code.
During tutorials reading/executing I have learned how to setup views using basic Flutter widgets – for placing buttons and labels, opening next view etc. It’s all done from the code, so for now a simple way to separate view’s code from the business logic is still on my to-do list.
First Flutter app – concept
Before starting to write the project, it’s necessary to do some research on the basics. For a Flutter initiate, the amount of information to absorb is quite significant, but you are not left to fend for yourself. The fast-growing Flutter Society offers a lot of support, and wherever the official Flutter page is lacking, you can turn to git repos, bugs lists, or Stackoverflow. From the sheer amount of Flutter-related topics and issues, you can see the already huge and constantly growing interest in Flutter/Dart.
For my first project I needed to find out a bit about:
- async get/post
- login form
- user profile with history
- images processing form
- playing sounds
- showing alerts
With the help of all the aforementioned go-to places, this particular bit was a piece-of-cake.
Getting started
So much to do and think about :). Time for the next step – starting some basic project work preparations. First things first – changing the name of the project – this turned out to be not much of an effort with the straightforward instructions from Flutter’s site. Kudos go to the Flutter team here – we all know what a chore project renaming can be on different platforms, so Flutter shines here as in the framework, renaming is no big deal at all. Same goes for changing package name with plenty of info to be found online.
Few more things to consider were: Android Manifest, Info.plist in Xcode, setting bundle.using flavours, Signing, Runner.workspace. Setting deployment target for IOS, for android. Yet again, extensive community and Google’s support were super helpful.
What I thought was a bit odd in Flutter was the long import syntax:
import ‘package:flutter_demo/model/user.dart’;
I prefer short imports, best if they only include the file name. My first impression on DART was that it’s quite lengthy. Instead of describing the job in a few simple words I felt like I was telling a detailed story about an action :). I will see what changes once I see and get a good feel for Dart constructs.
Working with code – analyzing examples
As a wise person once said “Learning programming language happens in two phases – learning to read the code and then to write the code” so a good place to start is with analyzing existing samples to get a proper feel of the Flutter platform. Seeing Abstract classes were present was a relief as I knew I’ll be able to do some inheritance and protocol-like SWIFT logic.
One small detail
Your app must be displayed in forced portrait view only – easy to figure out following this link
Fabric/Crashlytics integration.
Admittedly, this is where I had to fight my first serious Flutter battle. With Fabric, we’re currently in a bit of a situation, with it bought by Google and in the middle of being integrated with Firebase Crashlytics. It was not very clear to me which packages I should be adding to handle Fabric Beta distribution and Crashlytics error reporting. A little investigation on the side on Firebase Alpha replacing Fabric Beta for build distribution shows promise, but right now, with only Beta version available, hybrid access to both Fabric and Firebase is needed.
I had to create one Firebase Project for the app, create an IOS and Android app, transfer the GoogleService-Info.plist and google-services.json to platform project’s folders, perform necessary modifications in AndroidManifest.xml, build.gradle etc for Andro add Info.plist file and AppDelegate.m for the IOS. Then only the proper import in main.dart file and main() method modification to finally make Crashlytics work.
Running Continuous Integration
All this took the larger part of the day, but in the end my Continuous Integration was fully set up for my Flutter project, with the help of the fastlane for both IOS and Android platforms. There is no way to do it for Flutter directly or to create one fastlane setup for Flutter project to handle both platforms – you have to create separate Fastfiles for each platform and set them up independently. Here is a pretty good tutorial on automating continuous building. I have modified it slightly to let the builds go to straight to Crashlytics instead of iTunes or Google Play.
Configuration of the above is a bit of a handful – my advice is to stay on top of proper package versions, as it took me a lot of time to find the reason behind the “unexpected” compile errors.
I also had to deal with this error for Android:
> java.lang.RuntimeException: java.lang.RuntimeException: com.android.builder.dexing.DexArchiveMergerException: Unable to merge dex
This one was an easy fix after a quick search yielded the solution of adding multiDexEnabled true to defaultConfig of the build.gradle
Flutter-friendly design pattern – why BLoC?
As with any project, setting up the right architecture for running the app project is top priority. Researching both community and official recommendations, which all conclude that BLoC was the preferred option. I found this article helpful in understanding how BLoC architecture works. Turns out it’s very similar to MVVC for iOS, with the bloc class used instead of modelView. I figured BLoC architecture is the way to go and decided to work with it in my project.
The example app from the above article was quite similar to my own first flutter app project – all I needed was to add an additional project layer and separate content for particular modules. All it took was to expand basic folder list to all modules (the UI, blocks, models, resources). I also created a commonModels folder for keeping all models used in different modules in it. Some more in-depth read into the article resulted in a few slight changes to the code and refining the BLoC architecture for my project.
The implementation
Off to a rocky start, I found it really hard to create anything with DART code that would actually work – full respect to all who switch to the Flutter language with ease. For me, it was back to school on pretty much all aspects of writing code – one step forward, two steps back. All I knew was what I learned from step by step tutorials, examples, and some test debugging to see how establishing code breakpoints works.
In the end, I managed to create a small widget with 5 buttons defined by a designated model properties – not exactly enough to call in an application. Creating the UI didn’t go all that great either – all of a sudden DART, and the whole “everything is a widget” idea, started to remind JSON – just more complicated. I am still to familiarize myself with all UI parameter definitions – edges, dimensions, alignements etc… – I’m planning to get right on it once I’m done writing and analysing the code.
All in all I found working with Flutter somewhat challenging, but I can see that all the hype around is not baseless. With some refinements and in-depth learning, I think it is a very good addition to any software house’s tool set, and for myself, a great way to expand my capabilities as a programmer.
Read more:
What is blockchain? – a basic guide
What is a cloud application?
How much does it cost to make a booking type app?
19 Apps built with Flutter Framework
Flutter vs React Native – cross-platform frameworks comparison
Top 4 technologies to develop your app in 2020 | https://itcraftapps.com/blog/flutter-apps-first-steps/ | CC-MAIN-2020-24 | refinedweb | 1,763 | 59.23 |
Hi all
I am creating an application using sockets. Upon execution I want the app to immediately listen for a client socket, if it receives one, it accepts it and becomes the server. However, I am also going to provide the user a GUI with an IP and Port field and a Connect button so that if they choose to they can attempt to connect as a client. If the client connect succeeds the server listen dies. I am having trouble getting around the blocking behavior of ServerSocket.accept. I have packaged my entire SocketConnection into a easy to use class that has been tested and works without error. Does anyone have an idea as to how to implement this functionality into my class? Thanks.
import java.net.*; import java.util.NoSuchElementException; import java.io.*; public class SocketConnection implements Runnable { private boolean isHost = false; private Socket socket = null; private StringBuffer toSend = new StringBuffer(""); private StringBuffer toReceive = new StringBuffer(""); private BufferedReader in = null; private PrintWriter out = null; private final String TERMINATE_CODE = "!"; private boolean terminate = false; public boolean isHost() { return isHost; } public boolean isConnected() { return socket.isConnected(); } public boolean isTerminated() { return terminate; } public boolean isClosed() { return socket.isClosed(); } /** * Constructor for a client socket. Connects to the specified socket located at * the IP address provided and starts a new thread to send/receive data. * @param ip - address of the host computer * @param port - TCP port to be used * @throws IOException - Host found but failed to establish connection * @throws UnknownHostException - Specified host could not be found */ public SocketConnection(String ip, int port) throws UnknownHostException, IOException { isHost = false; socket = new Socket(ip, port); Thread t = new Thread(this); t.start(); } /** * Constructor for a server socket. Listens for and connects to a requesting socket * and starts a new thread to send/receive data. * @param port - TCP port to be used * @throws IOException - Failed to establish connection with requesting client */ public SocketConnection(int port) throws IOException { isHost = true; // Blocks until a client socket request is made ServerSocket hostSocket = new ServerSocket(port); socket = hostSocket.accept(); // Start new dedicated thread to handle send/receive data Thread t = new Thread(this); t.start(); } /** * To determine whether there is arriving data to be processed * @return - true if there is incoming socket data */ public boolean hasIncoming() { return (toReceive.length() > 0); } /** * Closes the open socket connection. */ public void closeConnection() { terminate = true; } /** * Gets the incoming data from the socket connection. If the data contains * more than one command, extracts the first command and leaves the rest in the buffer. * @return A string of the incoming data in text format excluding its newline char * @throws NoSuchElementException - when the incoming data buffer is empty */ public String receive() throws NoSuchElementException { synchronized (toReceive) { if (toReceive.length() == 0) { throw new NoSuchElementException("incoming data buffer is empty"); } String rest; int nextEndline = toReceive.indexOf("\n"); String s = toReceive.substring(0, nextEndline); if (nextEndline < toReceive.length() - 1) { rest = toReceive.substring(nextEndline + 1); toReceive.setLength(0); toReceive.append(rest); } else { toReceive.setLength(0); } return s; } } /** * Sends outgoing data via the socket connection. * @param s - String to be sent out via socket connection * @throws IOException - Failed to write data to socket output */ public void send(String s) throws IOException { synchronized (toSend) { toSend.append(s + "\n"); } } @Override /** * The dedicated thread of the SocketConnection object. Loops infinitely, * sending and receiving data from the socket in/out streams. 
*/ public void run() { try { in = new BufferedReader(new InputStreamReader(socket.getInputStream())); out = new PrintWriter(socket.getOutputStream(), true); String s; while(!terminate || (terminate && ((toSend.length() != 0) || in.ready()))) { // Send data if (toSend.length() != 0) { s = toSend.toString(); out.print(s); out.flush(); if (s.contains(TERMINATE_CODE)) { terminate = true; } toSend.setLength(0); } // Receive data if (in.ready()) { s = in.readLine(); if ((s != null) && (s.length() != 0)) { // Check if it is the end of a transmission if (s.contains(TERMINATE_CODE)) { terminate = true; } // Otherwise, receive what text else { appendToReceive(s + "\n"); } } } } in.close(); out.close(); socket.close(); } catch (IOException e) { System.out.println("I/O stream error in socket thread"); System.exit(1); } } /** * Thread safe way to dump socket input stream data into a * StringBuffer outside of the running thread * @param s - input string to append to the StringBuffer */ private void appendToReceive(String s) { synchronized (toReceive) { toReceive.append(s); } } } | http://www.javaprogrammingforums.com/threads/8382-socket-threads.html | CC-MAIN-2015-27 | refinedweb | 696 | 50.12 |
Request For Commits – Episode #3
Measuring Success in Open Source
with Andrew Nesbitt & Arfon Smith

Nadia and Mikeal talked with Andrew and Arfon about open source metrics and how to interpret data around dependencies and usage. They also talked about individual project metrics, how we can measure success, what maintainers should be paying attention to, and whether or not GitHub stars really matter.
Andrew Nesbitt is the creator of Libraries.io, and Arfon Smith works on open source data at GitHub.
Transcript
I’m Nadia Eghbal…
And I’m Mikeal Rogers.
On today’s show, Mikeal and I talked with Andrew Nesbitt, creator of Libraries.io, and Arfon Smith, who works on open source data at GitHub.
Our focus on today’s episode with Andrew and Arfon was around open source metrics and how to interpret data around dependencies and usage. We talked about what we currently can and cannot measure in today’s open source ecosystem.
We also got into individual project metrics. We talked with Andrew and Arfon about how we can measure success, what maintainers should be paying attention to and whether stars really matter.
Andrew, I’ll start with you. What made you wanna build Libraries.io? How was that informed by your GitHub Explore experiences, if at all?
I got a little bit frustrated working at GitHub on the Explore stuff. It was kind of deprioritized whilst I was there, and my approach with Libraries, rather than just building the same thing again outside of GitHub, was to use a different data source, which started at the package management level, and it turns out that’s actually a really good source of metric data, especially when you start looking into dependencies. If I had taken the approach of, “Let me look at GitHub repositories”, I would have gone down a very different path, I think.
Right. So tell me a little bit about that. So you pull out the whole dependency graph data - do you go into the kind of deep dependencies, or do you sort of stay at more of a top layer of first-order dependency data?
So for each project, it only pulls out the direct dependencies. But it picks up every project, because every time it finds anything that depends on anything else, it will go and investigate that as well. It ends up having the full dependency tree, but right now I don’t have it stored in a way that makes it very easy to query in a transitive way, if that makes sense. I’ve been looking into putting the whole dataset into Neo4j - a graph database - to be able to do that easy transitive query, and to be able to give you the whole picture of any one library’s dependencies and their transitive dependencies, but it’s not quite at that point. But I do have all the data to be able to do it.
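As a rough illustration of the transitive query Andrew is describing, here is a minimal sketch using Cypher via the Python Neo4j driver. The graph model below - Project nodes linked by DEPENDS_ON relationships - is a hypothetical schema for the sake of the example, not Libraries.io’s actual one:

```python
# Sketch: transitive dependents in Neo4j, assuming a hypothetical
# (:Project)-[:DEPENDS_ON]->(:Project) graph model.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def transitive_dependents(project_name):
    # *1.. follows DEPENDS_ON edges to any depth, so a single query
    # returns both direct and transitive dependents.
    cypher = (
        "MATCH (p:Project {name: $name})<-[:DEPENDS_ON*1..]-(d:Project) "
        "RETURN DISTINCT d.name AS name"
    )
    with driver.session() as session:
        return [record["name"] for record in session.run(cypher, name=project_name)]
```

The same question asked of a relational store would need recursive joins, which is why a graph database is the natural fit for this kind of lookup.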
Interesting. Okay. So you said that this is a much more interesting way to go about this than the GitHub data. What’s something that you found when you started working with the dependency data that you never had in GitHub Explore, or just with the GitHub data?
GitHub stars don’t really give you a good indication of actual usage, and GitHub download data is only really accessible if you are a maintainer of a project, rather than just someone who’s looking at the project from a regular browser’s perspective. If you actually look at the dependency data - not just the other libraries that depend on that particular library, but the whole ecosystem and how many, say, GitHub projects depend upon this particular package - it gives you a fairly good idea of how many people are still using that, still need that thing to be around so that code continues to work. And if there was a security vulnerability, you can see exactly how many projects may be affected. So you actually end up connecting the dots between… And I’ve only looked at GitHub data so far; I haven’t got around to doing Bitbucket or arbitrary Git repositories.
[00:04:21.05] But you can actually use package management data to connect the dots between GitHub repositories as well. You can say, “Oh, well given this GitHub repository, how many other GitHub repositories depend on it through NPM or through RubyGems?”
It’s good to hear that stars are useless, because I’ve also thought that. [laughs] That’s been my assessment, as well.
Yeah, I’ve [unintelligible 00:04:46.03] over how you shouldn’t judge a project by its GitHub stars. There’s one particular project that’s a great example of that, it’s called Volkswagen. It is essentially a monkey patch for your CI to make sure it always passes. I think it’s got something like 5,000 GitHub stars, and it’s maybe downloaded 50 times on NPM; it has zero usage.
Yeah, that’s by Thomas Watson. It was a joke when VW had that scandal where they were just passing all their tests, so he wrote a module called Volkswagen that just made all your tests pass, no matter what. [laughs] It’s brilliant… But yeah, utterly useless in terms of actual usage.
Yeah, and if you actually look at the stars… Of course, people have contributed to it, but even looking at contributed data doesn’t give you a good indication of actually is this a useful thing, a real thing, and should I care about it? I always look at GitHub stars as a way of… It’s kind of like a Hacker News upvote or a Reddit upvote, or a Facebook like. It just means like, “Oh, that’s neat!”, rather than “I’m actually using this” or “I used to use this five years ago.” No one ever un-stars anything either, whereas if people stop using a dependency, you actually see the amount of people that depend on a thing go down.
I think stars are an indication of attention at some point in time, and that is all we can say about them. So if you look at stars versus pageviews on a given repo, they correlate very well. So in defense of stars, we shouldn’t use them as “This is what people are using”, but they’re a good measure of some popularity, some metric. And I think that’s exactly what you just said, Andrew. Consider it like a Facebook like, or something like that. It’s got very little to do with how many people are actually using something at any point in time.
Yeah. I saw someone actually build a package manager; I think it was only a prototype, but I really hope it never actually became a thing, where it would pick the right GitHub repository if you just gave it the name rather than the owner and the name, by the thing that had the most stars, which sounded like a terrible idea at the time and completely gameable.
Yeah, that doesn’t sound like a good idea. You mentioned something interesting, which was you can understand how people use it in terms of just it being depended on. Recently GitHub did this new BigQuery thing, and one of the results is that you can do RegEx queries on the actual file content of a lot of this stuff, so you can start to look at which methods of a module people might use or how they might use it. Could you get into that a little bit?
Yeah, so just to recap, the data that we put into BigQuery is basically not only the event data that comes out of the GitHub API, which is just “Something happened on this public repo” - that’s what the GitHub Archive has been collecting for a long time - but in addition to that, the contents of the files and all the paths of the files for about 2.8 million repos, so basically anything with an open source license on GitHub that’s in a public repo.
[00:08:15.07] So that allows you to do things like if there’s a particularly - maybe a method call in your public API that you wanna try and measure the use of, then you can now actually go and look for people using that explicitly. So currently really complex kind of RegEx stuff on GitHub searches is pretty hard; in fact, I’m not sure you can do a RegEx query on GitHub search, so that’s one of the strengths of BigQuery, that you can actually construct these really complex, expensive queries, but then of course that gets distributed across the BigQuery framework, so it comes back in a reasonable amount of time.
For me I think the exciting thing about that… I think that’s really complementary to things like libraries that go and look at package managers - that’s incredibly useful, but I think not every language has a strong convention for the package management that they use, unfortunately, some of us forget that; I think we’re very fortunate in Ruby and JavaScript land, that there’s really good conventions there, which are really useful, so using dependencies is great. But for those cases where that isn’t an option, you can now actually go and look for telltale signs of your library being used, and maybe that’s because of an import statement or an actual method call.
Yeah, for languages like C, that’s pretty much the only way to do it. There’s just no convention there, other than the language itself. And then for some other package managers, you actually have to execute a file to be able to work out the things that it depends upon, which I avoid doing because I don’t really wanna run other people’s code just arbitrarily.
Well, in the NodeJS project we’ve been trying forever to really figure out how people are using some of these methods, because if we wanna, say, deprecate something, we’d really like to know how many people are using that in the wild and at which level it’s depended on. But we’ve had several projects where we tried to pull all of the actual sources out of NPM and create some kind of parse graph and then figure out how that gets used… It’s just such a big undertaking that it hasn’t really happened. When this BigQuery stuff got released we were like, “Oh my god, how far can we get with the RegEx to figure out some of the stuff that’s used?”, because that’d be really useful.
Yeah, it kind of makes me sad that we’ve made everyone write crazy RegExes, but sorry about that. Hopefully, that will be useful. [laughs] Hopefully a bunch of good stuff can be done; people are gonna have to level up their RegEx skills, I think.
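To make the “crazy RegExes” concrete, here is a sketch of the kind of query being discussed, run from Python against the public GitHub dataset. The table names match the dataset as published (bigquery-public-data.github_repos), but verify the current schema before relying on it, and note that scanning the contents table is expensive:

```python
# Sketch: which repos call require('request') in their JavaScript files,
# using the public GitHub dataset on BigQuery. Needs Google Cloud
# credentials; the contents table is very large, so full scans cost money.
from google.cloud import bigquery

client = bigquery.Client()

sql = r"""
SELECT f.repo_name, COUNT(*) AS matches
FROM `bigquery-public-data.github_repos.files` AS f
JOIN `bigquery-public-data.github_repos.contents` AS c
  ON f.id = c.id
WHERE f.path LIKE '%.js'
  AND REGEXP_CONTAINS(c.content, r"require\('request'\)")  -- extend for double quotes
GROUP BY f.repo_name
ORDER BY matches DESC
LIMIT 20
"""

for row in client.query(sql).result():
    print(row.repo_name, row.matches)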
Just for people who are newer to the metrics world, why should they care - to be blunt - about this dataset being open and being on BigQuery? What are some things that you expect the world to be able to do with this data? Even outside of people like Mikeal with Node - policy makers or researchers or anyone else.
One of the things I think is incredibly difficult right now for some people is to measure how much people are using their stuff. For a maintainer of an open source project maybe that’s not a huge problem, because you can go and look at things like Libraries and see how many people are including your library as a dependency, or maybe you can just see how many forks and stars you’ve got on your project on GitHub, but I think there are some producers of software where actually reporting these numbers is incredibly important, and Nadia, you mentioned researchers. If I get money as an academic researcher from a federal agency like the National Science Foundation or the National Institute of Health, one of the really important things about getting money from these funders is you need to be able to report the impact of your work.
[00:12:26.16] It’s currently kind of hard to do that if you have your software only on GitHub and you don’t have any other way of measuring when people use the library. You don’t have any direct ways of doing that, other than just looking at the graphs that you have as the owner of the software on GitHub. So I’m excited about the possibility of people being able to just construct queries to go and look… Of course, only open source, public stuff is in this BigQuery dataset, but I think it offers at least a place where people can go and try and get some further insight into usage.
I think it’s actually a hard problem to solve, but I know there are some environments - I’m trying to think of some large institutional compute facilities, big HPC centers… People have done some work, doing some reporting on when something’s being installed or run, and actually Homebrew I think have started doing that recently as well, starting to capture these metrics. Because it’s really tough to know; not everything that people produce is open source, so it’s not even clear that everything’s out there and measurable and available. It’s really tough if you need good numbers to actually say, “Who’s using my stuff? Where are they?”, and there’s lots of very legitimate privacy concerns for collecting all of that data. So yeah, it’s a hard problem.
So for you coming from the academia world, have you gotten requests from people from the scientific community around using this type of data? Did those experiences help inform the genesis of this project at all?
Yeah, a little bit. Very early on when I joined GitHub I got some enquiries from people saying, “We’d love to get really, really rich metrics on how much stuff is being downloaded, where people are downloading from…” - all this stuff that you needed if you had to report and you wanted really rich metrics. Some of those data we just can’t serve in a responsible fashion. There’s no way we can tell you the username of every GitHub user of your software, that would be a gross violation of users’ privacy on our part. So there are things that we just can’t do.
The other thing is - and I think this is a pretty sane standpoint for us to take - we take user support very seriously, so if somebody comes to me with a data request, it may be ethically possible for me to service that, and it might be technically possible for me to service that. But if it takes two weeks of my time to pull that data, then we’re not gonna help them with that problem, and that’s because we kind of believe that everybody… We should be able to service a thousand requests that are coming like that; we should be able to give uniformly the same level of quality support service to people, so we generally try and avoid doing special favors, if that makes sense, in terms of pulling data. So this is why making it a self-service thing, getting more data out in the community, making it possible for people to answer their own questions is a much more scalable approach to this problem.
[00:15:58.08] I think the next step for me personally with this data being published is to start to kind of show some examples of how it can be used to answer common support questions that we see. I think that’s kind of the obvious next step from my standpoint.
And Andrew, you’re in a position where you’re actually taking a bunch of public data that’s out there in all these different public ecosystems and then kind of mashing it together, so you’re like your own customer for this data. What are some of the interesting things that you’ve been looking at? What are some of the most interesting questions that you’ve been able to answer?
Unfortunately I didn’t have access to the BigQuery earlier, so I’ve been collecting it manually via the GitHub API for the past year and a bit, which takes a lot longer, but it also picks up all of the repositories that don’t have a license, which I guess often it’s probably best not to pull people’s code out if they have not given permission to do that.
Some of the things that I’ve been able to pull out that have been quite interesting are looking at not only the usage of package managers across different repositories, but the number of repositories that use more than one package manager - that use Bower and NPM, or RubyGems and NPM - and then looking at the total counts of those usages, as well as the number of lockfiles, which I found really interesting.
Coming from a time working with Rails before Bundler, it was incredibly painful sharing projects or coming back to projects and trying to reinstall the set of dependencies that all worked, given the transitive dependencies that move around all the time with new versions. And it looks like the Ruby community is pretty much… For every Gemfile there was a Gemfile.lock, whereas for the Node community, there’s maybe five or ten thousand shrinkwrap files that I’ve found on GitHub on public projects, compared to the nine hundred thousand package.jsons, which in the short term won’t be a problem, but could potentially cause Node projects to be very hard to bring back to life if they’ve not been used in over a year. Because trying to rebuild that transitive dependency graph may be impossible - or it may be really easy, it’s hard to know. But it’s quite interesting to look at how different communities approach “How reproducible can I make my software?”
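A toy version of that lockfile measurement - pairing each manifest with its conventional lockfile across a directory of checked-out repos - might look like the sketch below. The manifest/lockfile pairings reflect the conventions of the time (npm shrinkwrap predates package-lock.json); the ./repos layout is purely illustrative:

```python
# Toy sketch of the lockfile measurement Andrew describes: for each
# checked-out repo, count manifests that do and don't have a matching
# lockfile. Pairings and paths here are illustrative assumptions.
from pathlib import Path

LOCKFILE_FOR = {
    "Gemfile": "Gemfile.lock",
    "package.json": "npm-shrinkwrap.json",
}

def lockfile_coverage(repos_root):
    counts = {m: {"manifests": 0, "locked": 0} for m in LOCKFILE_FOR}
    for repo in Path(repos_root).iterdir():
        if not repo.is_dir():
            continue
        for manifest, lockfile in LOCKFILE_FOR.items():
            if (repo / manifest).exists():
                counts[manifest]["manifests"] += 1
                if (repo / lockfile).exists():
                    counts[manifest]["locked"] += 1
    return counts

for manifest, c in lockfile_coverage("./repos").items():
    print(f"{manifest}: {c['locked']}/{c['manifests']} have lockfiles")
```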
I think we’re heading into the break right now… When we come back we’ll talk about the open source ecosystem.
[00:19:38.23] We’re back with Andrew from Libraries.io and Arfon from GitHub. In this segment I wanna talk about the broader open source ecosystem and the types of metrics that are and aren’t available to people, because I’ve heard a lot of confusion about “Well, what can and can’t we measure right now?” and I think both of you together probably have a good handle on that. I want to start with talking about GitHub data, since that was mentioned earlier, around download data and stars and things like that. Are there any sort of myths that you wanna address around the types of things that GitHub actually does measure or doesn’t measure?
I don’t think so. I mean, I don’t know what myths there might be. I would love to hear things that you’ve heard that you would love to know if they’re true. I don’t know of any kind of whisperings of what GitHub might be doing, so I’m happy to respond to questions.
I hear a lot around just download data - whether GitHub actually has the data and isn’t sharing enough of it, and why not use download data in addition to stars as something that people can see…
Sure… Yeah, okay. So there is a difference between what you as a project owner can see about a GitHub project and what you as a potential user of that software can see. So there are graphs with things like the number of clones of the software, which I think is a good metric, and there are graphs showing how many pageviews your project got - like a mini Google Analytics. So anybody who owns a GitHub repository can see those graphs. They’re not exposed to the general public, and I would like them to be; I think they’re useful. I think we were kind of cautious initially when rolling those out, thinking that was the kind of information that maybe is only relevant or appropriate for the repository owner to see… I don’t know, I think that data is generally useful for people to be able to see. Andrew, you’ve mentioned before the idea that there’s a package manager that tries to suggest the correct GitHub repository based on just a name, and it does that based on stars - that’s not great, but at the same time, when you are looking for a piece of software to use, if it has a bunch of forks and a bunch of stars and a bunch of contributors, then that helps inform your decision about what to use, even if you haven’t even looked at the code yet, right? Personally, I use that information to help inform my decision.
I seem to remember the metrics weren’t exposed because of some of the referrer data potentially leaking people’s internal CI systems.
Yeah, that might be possible. I’m not hugely familiar with exactly why the data isn’t exposed right now. I think it’s important to remember that we take user privacy very seriously, so the thing here is you wanna be on the right side of people’s expectations of privacy. There are things that GitHub could do that would surprise people - and not in a good way - and we don’t want that to happen. So you’re always gonna see us on the side of reducing the scope of who could see a particular thing. That said, I think consumption metrics, fork events - we used to expose downloads. I think one reason we don’t expose downloads anymore is we actually just changed the way that we capture that metric, and it’s not captured in a way that is designed to be served through like a production service. It’s in our analytics pipeline, but it’s not in a place where we could build an API around it, it’s just not performant enough to build those kind of endpoints.
[00:23:47.15] So yeah, we capture more information than we expose, but that’s just a routine part of running a web application and having a good engineering culture around measuring lots of things. The decision about what to further expose to the broad open source community or the public at large is largely one based on making sure that we’re in line with people’s expectations of privacy, but also just based on user feedback. So if there’s stuff that you would like to see presented more clearly, you should definitely get in touch with us about that, because we are responsive to things that come up as common feature requests. That’s a good way of giving us feedback.
I think also any metric has to be qualified, right? A lot of this talk about stars - stars are not an indication of quality, they’re an indication of popularity at a point in time, like you said, but people take it as that because it’s the only data that they have.
An example is in NodeJS we have metrics for which operating system people are using, so we always put out two data points. One is the operating systems that have pulled downloads of Node, either the tarballs or the installers of some kind, and then we also have the actual market share for the NodeJS website, visitors to the website. And those are two ends of a very large spectrum in terms of machines that are running Node and people that are using Node.
Windows, for instance, is huge on the people end and incredibly small on the actual machine end. But we do a lot to qualify those before we put them out, to set people’s expectations about them.
Yeah, and there’s another thing… I think the Python package index has a similar - like a badge you can put on your profile. And you see this, people will put it, the number of downloads last month from the Python package index, and it’s exactly the same problem. For a fast-moving project where they’re doing lots of CI builds it might be 50,000 downloads last month, or something, and you’re like, “Whoa, that’s crazy!” and then actually there’s not that many users, it’s actually the CI tools that are responsible for most of those.
Yeah, the problem with download metrics on packages too is that you also get into the dependency graph stuff, right? Downloads are really good at looking at the difference in popularity between something like Lodash and Request. They’re both very popular, but the difference in downloads gives you some kind of indication of the difference. But there’s also a dependency of Request that’s only depended on by three other packages, that has amazing download numbers because it’s depended on by Request, right?
Yeah, I have one of those, base62. I don’t think there are many projects that use it, but it gets like one and a half million downloads a month because React transitively depends upon it, so it’s downloaded by everyone all the time. But it never changes, it’s never really used. Lots of people reimplement it themselves.
That’s funny. There’s a lot of packages like that. The whole Leftpad debacle was people did not know that this was used by a thing that used a thing that used a thing. It wasn’t that popular of like a first-order dependency, it just happened to be in the graph of a couple really popular things.
That’s one reason why I haven’t started pulling download stats for Libraries - because you can’t compare across different package managers either, because the client may cache really aggressively. RubyGems really aggressively caches every package, whereas people are kind of blasting away their node_modules folder whenever they want to reinstall things, so the numbers - you can’t even try to compare them across different package managers. If you’re looking for “I wanna find the best library to work with Redis”, then download counts just muddy the waters, really.
I think a lot of the metrics fall into that, though. When you start looking at them across ecosystems, they really don’t match up. The one that I think of comparing a lot is Go and NPM. GoDoc is actually like a documentation resource, it’s not really a package manager, but people essentially use the index of it as an indication of the count of total packages. But that’s really about four times what the actual unique packages are, which is an interesting way to go, and it’s one thing that just doesn’t match up with the way that NPM or PIP do it. Not that it’s invalid, it’s just measuring something different.
Yeah, the Go package manager is slightly strange because it’s so distributed. It’s just, give it a URL and that is the package that it will install, so basically every nested file inside that package could be considered to be a separate thing, because it’s just a URL that points to a file on the internet, as opposed to something that has been explicitly published as a package to a repository somewhere.
I’d like to get into the human side of this, too. You’ve mentioned this a little bit earlier when you were talking about the difference between NPM and Ruby in terms of locking down your dependencies. That’s not enforced by the package manager, it’s just now a cultural norm to use Bundler and not NPM. Are there some other people differences that you see between Go and NPM because of those huge differences? Or any other packet manager, for that matter.
I’ve tried not to look too much into the people yet, partly because I didn’t wanna end up pulling a lot of data that could be used by recruiters, and make Libraries a source of kind of horrible data that would abuse people’s privacy.
I didn’t mean like individuals, I meant like culturally. I didn’t mean like, “Be creepy.” [laughs]
[inaudible 00:29:55.07] all kinds of horrible things. Nothing springs to mind… I guess you can look at the average number of packages that a developer in a particular language or package manager would potentially publish, or the size of different packages. Node obviously tends towards smaller things, or a lot more, smaller things. There are still some big projects as well, but it’s a bit more spread around, whereas something like Java tends to have really large packages that would do a lot of things.
I haven’t done too much in comparing the different package managers from that perspective, because it felt like… As you said, you don’t get much mileage from going like “What this thing compared to this thing?” It’s much better to look at what packages can we highlight as interesting or important within a particular package manager and see if we can do something to support those and the people behind them; so looking at who are the key people inside the community, and then “Are they well supported? What can we do to encourage or to help them out more?” as opposed to trying to compare people across different languages.
You definitely see a certain amount of people who live in more than one language as well. It’s not often that there’s people that are just only doing one particular language.
I’m curious whether there’s - I don’t know a whole lot about this, but if there’s any way to standardize how package managers work across languages, or just standardize behavior somehow. Because I just sort of think for people that are coming for this from outside of open source, but are really curious of, for example, what are the most depended on libraries that we should be looking at and trying to support those people. It seems like it’s just really hard to count… Every language is different, every package manager is different.
Yeah. I’ve standardized as much as possible with Libraries. The only way I could possibly collect so many things is to kind of go, “Let’s treat every package manager as basically the same, and if they don’t have a particular feature then that’s just ‘no’ for that particular package manager.” If you ignore the clients and the way the clients install things and just look at the central repositories that are storing essentially names of tarballs and versions, then it’s fairly easy to compare across them when there is a central repository. Things like Bower and Go are a little bit more tricky because they don’t have that… You end up going like “Well, we’ll assume the GitHub repo is the central repository for this package manager”, which for Bower it is, but for Go it’s kind of spread all over the internet; it’s mostly GitHub, but there are things all over the place.
But you can then kind of go, “Okay, within a given package manager, show me the things that are highly depended on but only have one contributor, or have no license”, which is easy to pull out in Go, but then “Order by the number of people that depend on it or the number of releases that it’s had” to try and find the potential problems or the superstars inside of that particular community.
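As a sketch of that kind of “highly depended on but fragile” query, here is what it could look like against a hypothetical relational snapshot. The table and column names are invented for the example and will not match Libraries.io’s real schema:

```python
# Sketch: surface packages that many repos depend on but that have a
# single contributor or no license. The schema is hypothetical, and it
# assumes a local snapshot has already been loaded into packages.db.
import sqlite3

conn = sqlite3.connect("packages.db")

sql = """
SELECT name, dependent_repos_count, contributors_count, license
FROM projects
WHERE platform = ?
  AND (contributors_count = 1 OR license IS NULL)
ORDER BY dependent_repos_count DESC
LIMIT 50
"""

for row in conn.execute(sql, ("NPM",)):
    print(row)
```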
Right. I can see you kind of standardizing the data and some of the people work, but the actual technology - or even the encapsulation - you eventually hit the barrier of the actual module system itself, right? One of the reasons why Node is really good at this is because NPM was built and the Node module system was essentially rewritten in order to work better for NPM and better for packaging. So a lot of the enablement of the small modules is that two modules can depend on two conflicting versions of the same module, which you can’t do if you have a global namespace around the module system, which is the problem in Python, for instance.
So there’s a general trend I think towards everything getting smaller and packages are getting smaller, but some module systems actually don’t support that very well, and you’re hitting kind of a bottleneck there.
Yeah, I don’t think there are many other package managers other than NPM that allow you to run multiple versions of a package at the same time, and partly because of the danger of doing that, that you introduce potentially really subtle bugs in the process. But most of the package managers in the languages that at least I have an experience with will load the thing into a global namespace, or the resolver will make sure that it either resolves correctly to only have one particular version of a thing, or it will just throw its hands up and go “I can’t resolve this dependency tree.”
Yeah, it’s important to note that’s not part of NPM, it’s part of Node. Node’s resolution semantics enable you to do that; it’s not actually in NPM. NPM is just the vehicle by which these things get published and put together.
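The mechanism Mikeal is pointing at is Node’s require lookup, which searches for the nearest node_modules directory upward from the requiring file; because each package can carry its own nested copy of a dependency, two packages can load different versions of the same module in one process. A rough Python rendering of that walk, heavily simplified - the real algorithm also handles core modules, package.json “main” fields, and file extensions:

```python
# Rough sketch of Node's node_modules lookup: the nearest copy wins,
# so each subtree of node_modules can carry its own version of a module.
# Simplified; the real resolver also handles core modules, package.json
# "main" fields, and extension resolution.
from pathlib import Path

def resolve_module(requiring_file, name):
    directory = Path(requiring_file).resolve().parent
    while True:
        candidate = directory / "node_modules" / name
        if candidate.is_dir():
            return candidate  # nearest copy wins
        if directory == directory.parent:  # reached the filesystem root
            raise ModuleNotFoundError(f"Cannot find module '{name}'")
        directory = directory.parent

# Example: with a tree like
#   app/node_modules/request/
#   app/node_modules/b/node_modules/request/
# code in app/node_modules/b/index.js resolves b's nested copy, while
# code in app/index.js resolves the top-level one - two versions at once.
```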
I think there’s been valiant efforts to make an installer and an NPM-like thing in Python, and they eventually hit this problem where you actually need to change the module system a bit.
Yeah, I made a shim for RubyGems once that essentially did that and it made a module of the name and the version, and then kind of hijacked the require in Ruby. It was a fun little experiment, but ends up being… You’re just fighting against everything else that already exists in the community. So you kind of wanna get in early before the community really gets going and starts building things, because once all that code is there it’s really hard to change.
[00:36:00.08] In that vein, have you seen any changes across these module systems as they’ve gone along? Have any really spiked in popularity or fallen? Are there changes that actually happen in these ecosystems once they get established?
Not so much. Elixir is making a few small changes, but it’s more around how they lock down their dependencies. Usually once there’s a few hundred packages it’s too late to change things - and often it’s because I guess there’s just not many maintainers that are actually working directly on the package managers; often they’re completely overwhelmed anyway just trying to keep up, let alone be forward-thinking with a lot of this stuff. And I get the feeling that a lot of people are building their package manager for the first time and kind of don’t really learn the lessons of previous package managers. CPAN and Perl solved almost every problem a long time ago…
It’s true…
…and these package managers go round and eventually run into the same problems and solve the same things over again.
Related to that - I’m curious for both Andrew and Arfon - when we talked about looking at stars versus looking at downloads, and looking at projects that are trending or popular versus ones that are actually being used, for someone who’s trying to look through available projects and fair out which ones they should be using, how should they balance those two ideas? Because it sounds like once an ecosystem gets established then nothing really changes a whole lot, so you could make the argument that just because a lot of people are using a certain project doesn’t mean that you should also be using it. It could also encourage a different kind of behavior, whereas if you’re telling people only to look at the popular ones, then that encourages a behavior of doing, “I don’t know, maybe it’s not the best project.” So how do you balance - should we be looking at which one is trending or new or flashy, versus something that is older but everybody is using?
Yes, tricky one. I’ve been kind of intentionally avoiding highlighting the new, shiny things in package managers for the moment, and kind of not doing any newsletters of “Here are the latest and greatest things that have been published.” I think this mirrors my approach to software at the moment, which is to focus on actually shipping useful things to solve a problem, as opposed to following whatever the latest technology is.
But that’s just my point of view. There are lots of people who are looking for employment and want to be able to keep on top of whatever is currently the most likely to get them a job, which is a very different view of “What should I look at? What should I use?”
Something I really struggle with software in general, you often hear people saying, “Oh, this project should just die, because it’s not following modern development practices, or it’s just kind of hopeless and we should just focus on whatever is new.” I think it’s because it’s comparatively easier to do that with software infrastructure than it is with physical infrastructure; they can kind of just throw something away. But there’s a part of me that’s also like, “Well, maybe we should reinvest in things that are older but that everybody is still using.”
Yeah, and sometimes it’s a case of people very loudly saying, “I’m not gonna use this anymore”, whereas there are a number of people that are just using it and not telling anyone, just getting on with what they’re doing. They still require that stuff. Often you see companies will have their own private fork, or they’ll just keep their internal changes and improvements and never contribute them back, because they’re just solving their own particular problem.
[00:39:54.15] Right. I actually think this is one of the things where conventions can really help. I still recommend Rails to people who are getting into web development, because when you do Rails new, it comes with an opinion on what your web server should be, what your testing framework should be, what JavaScript libraries you should use. And there’s a set of reasonable norms, the current maybe flavor or opinion of the core Rails team, but that’s valuable. If you don’t know better, than actually picking what they recommend is completely fine. It’s not gonna trip you up.
I relatively recently started doing some Node stuff and I wanted to find a testing framework; I just wanted to write some tests, and I ended up going through about six in about five hours and it seemed by my assessment of what’s going on, the community was moving so quickly - three of the frameworks are all written by the same person. They clearly changed their opinion and had a preference about the way that they were going to now work, but I literally couldn’t get… It wasn’t a very satisfactory experience because things were moving so fast.
I consider myself reasonably technical and pretty good at using GitHub hopefully, and I found it hard to find a good set of defaults. I don’t know, I think finding the right thing, it’s…
It’s very similar in the browser at the moment. It’s hard to know - is this library the right thing anymore? I find myself going to, and I use DotCom to work out, like “Is this mirroring and API that now is a standard, or has it moved on?” because the browser has been evergreen mix, everything really hard to… And you can’t freeze anything in time anymore with anything that’s delivered to a browser, because Chrome is updating every day almost.
Yeah, I don’t know… The other thing is if you actually went out, stick your neck out and say “You should use these things” then somebody’s obviously gonna shout at you on the internet and say “You’re an idiot. You should use this thing.” I think it’s hard for the individual to have a strong preference and be public about that. It’s an unsolved problem, I think.
The scary thing to me is that there is no correlation that I can find between the health of a project and a popularity of a project.
Yes!
It’s totally fine if it’s not the coolest thing, but people are still working on it and it’s still maintained. But things actually die off and the maintainer leaves and it’s still popular and still out there, and still being heavily used because it’s that thing that people find. But as you said, that maintainer already moved on to a new project, didn’t hand it over to anybody, has a new testing framework that they’re working and doesn’t really care about this thing. So we don’t have a great way to surface that data or to just embed into the culture, like when you’re looking for something, look for health, and what does health mean to a project?
And making that argument to someone that… They might not care about the health, because they’re like, “Well, it’s popular and everyone’s using it.” I struggle with sort of like what is a good argument for saying “You should care about this” to a user.
Yeah, it’s a very long-term thing as well, because if you get an instant result and you can ship it and be done, you’re like “Oh, that’s fine, I don’t need to come back and look at this again”, whereas in six months, a year’s time you might come back to it and be like “Oh, I wish I didn’t do this.” But you have to be quite forward-thinking; especially as a beginner, that can be something that you just don’t consider, the long-term implications of bit-rot on your software.
Yeah, I feel like there was a thing relatively recently on Hacker News, like “Commiserations, you’ve now got a popular open source project”, or something like that. It was this really well-articulated overview of, so you publish something on GitHub; now a bunch of people are using it, and now you’ve got the overhead of maintaining it for all of these people that maybe you don’t really wanna help.
[00:44:06.17] For me that’s just a good demonstration of, you know, lots of people publish open source code, and they’re doing that because that’s just normal, or maybe they’re doing that because that’s the free side of GitHub, or whatever the reason is they’re doing that; or they’re solving probably their own problems - they were working on something because they were trying to solve a problem for themselves. If that then happens, to become incredibly popular, because that’s a useful thing and lots of people wanna use it, there’s no contract of “It’s my job now to help you.” There’s just conventions and social norms around what it looks like to be a good maintainer, but there’s no…
I think a lot of people who publish something that then becomes popular maybe don’t want to maintain it, or maybe don’t have the time to maintain it. Money helps, I think, but I think funding open source is hard; for lots of people it isn’t their day job to work on these things, and I think there’s not a good way yet - apart from the very large open source projects - of handing something off to a different bunch of people. I think that’s actually not very well solved for. You see Twitter do it with some of their large open source projects, they put them in the Apache Software Foundation, but that’s a whole different kind of scale of what it looks like to look after an open source project.
Nadia, you’ve written a bunch about this, I’m sure you’ve got a bunch of opinions on this as well.
Yeah.
I think that you’ve really highlighted the basis for the shift in open source, which is that we’ve gone to a more traditional peer production model. If you read anything from Clay Shirky about peer production, it’s like you publish first and then you filter, and the culture around how you filter and how you figure that out is actually the culture that defines what that peer production system looks like.
And in older open source, in order to get involved at all it was so hard, that you basically internalized all of that culture and then basically became a maintainer waiting in the wings, and that’s just not the world anymore.
People publish that have no interest in maintaining things at all, because everybody just publishes, that’s the culture now. I think we’re actually gonna come into a break now, but when we get back we’re gonna dive into what are those metrics of success, what are those metrics of health and how can we better define this.
[00:48:49.29] And we’re back. Alright, so let’s dive right into this. What are the metrics that we can use for success? How can we use this data to show what the health of an open source project might be and expose that to people? Let’s start with Arfon, since we have so many new metrics coming out of this new GitHub data.
Yeah, so I’ll start by not answering your question directly, if you don’t mind. One thing I would love to see is… There are things that I can do, and anybody who’s looked at an enough open source software… If you give somebody ten minutes, “Tell me if this project is doing well”, you can answer that question as a human, right? You can go and look at the repo, maybe you find out they have a Slack channel or discussion board, you go and see how active that is, you maybe go and look at how many releases there were, how many open issues there are, how many pull requests end up being responded to in the last three or four months… You can kind of take a look at a project and get a reasonable feeling for whether it’s doing well or not, and that I think is the project’s health. I think that’s what we can do as an experienced eye.
What that actually means in terms of heuristics, the ways in which we could codify that in terms of hard metrics, I think that’s a reasonably tough problem. I don’t think it’s impossible by any stretch, but it’s things like - we could make some up right now. Like, are there commits coming and landing in master? Are pull requests being merged? Are issues being responded to and closed? Another one I’m particularly interested in because I think this is pretty important for the story we tell ourselves about open source, the kind that anyone can contribute, “Are all the contributions coming from the core team, or are they coming from the outside of the core team?”
Yes, yes!
There’s one quote that calls this the ‘democracy of the project’. Is it actually - ‘meritocracy’ is a dirty word these days, but is it the community that’s contributing to this thing, or is it just three people who are actually rejecting the community’s contributions and are just working on their own stuff?
Is it participatory, right? Can people participate? That’s the question.
Yeah. How open is this collaboration, is the way I like to think of it. Because I think that’s the thing we tell ourselves, and that’s one of the reasons that I think open source is both a collaboration model and a set of licenses and ways to think about IP. For me, the most exciting thing about open source - and actually about GitHub - is that I think the way in which collaboration can happen is very exciting. You have permission to take a copy, do some work and propose a change, and then have that conversation happen in the open.
A lot of people do that, but they’re actually working in a very small team, or working together. Actually, a while ago I tried to measure some of this stuff on a few projects that I use, and you can see quite clearly that some projects are terrible at merging community contributions. They’re absolutely appalling at it. I can’t name names; some of them are incredibly popular languages.
You can name names.
I totally won’t, I’ll absolutely not. Some of them are very poor. But then actually, just to counter that, okay, so what does it mean if you are very bad at merging contributions? Maybe that means your API is really robust and your software is really stable, right? It’s not clear that being very conservative about merging pull requests is wrong, but it does mean that the community feels different. It does mean that the collaboration experience is [unintelligible 00:52:44.16]
That’s exactly what I wanted to tease apart a little bit. I just had a talk recently where I was looking at Rust versus Clojure and how both of those communities function, and they’re really different. Rust is super participatory and Clojure is more BDFL, but one can make the argument that both are still working, and Clojure really prioritizes stability over anything else, so that’s why they’re really careful about what they actually as contributions.
[00:53:10.28] So we talked about popularity of projects and then we’re talking now about health of projects, and it feels like two parts of it. One is around “Is this project active? Is it being actively worked on and being kept up to date?”, and you can look at contribution activity there. The other part is “Is it participatory or is it collaborative? Does the community itself look positive, healthy, welcoming?” But those are two pretty separate areas in my opinion.
Yeah, I’ve been looking into this a little bit as a way of… The libraries will sort all the search results by kind of a quality metric and try to filter any ones that it thinks is bad. One of the best metrics for that kind of thing… “Is this project dead?” isn’t really the activity in the commits, because if something is finished, especially if it’s really high-level… Like a JavaScript thing that has no external dependencies, it probably doesn’t need to change. So it doesn’t necessarily mean because it’s not been updated in a year it’s particularly bad, but the amount of the activity on the issue tracker, so if there’s actually… Like, “What’s the average time to response for a pull request or an issue?” is a really good indication of if there’s actually someone on the other side of that project that can help, that can merge those pull requests if need be. That may mean that the project doesn’t need anything happening, but at least the support requests are being listened to, and it gives you a good indication of if there was a security vulnerability found in this, would there be someone who could ship a new version? And the data in the package manager is for the number of users that are available, I guess is… There’s a lot of data that’s locked up in package managers that never gets out. Does the maintainer log in regularly? Are they even still around? Are they releasing anything? That would give you an indication of if that person is still there. Because that ends up being kind of a single point of failure. Often there would be lots of people with a commit bit on GitHub, but not necessarily the ability to publish that via whatever package manager, or even for the lower-level things, push out to something like
apt or
yum, which is an even smaller number of people for the project that could actually publish whatever changes were merged in, unless everyone is literally pulling from GitHub directly, which I don’t think most published software happens that way yet.
My prediction here is that the people and the organizations that are gonna solve this are gonna be the ones that are paying most attention to business users of open source. Because if you are a CIO and you’re thinking about starting to use open source more extensively in your organization, then assessing the risk of that in terms of maintenance and service agreements and understanding of whether a project is - if it does have a security vulnerability that’s likely to be patched… It’s useful to know in open source generally. “Should I use this library because it’s likely to see updates when Rails 5 is released?” or “When something happens, can I use my favorite framework with this, or my favorite tool? Is that likely to happen?” That’s useful to know, but it’s not business-critical. I think the people who really want a hard answer to this are more likely to be business consumers. That’s my prediction. I think there’s actually a lot of opportunity to do good stuff in this space.
[00:57:12.00] The Linux Foundation are a little bit around that with the Core Infrastructure Initiative, where they’re trying to see, “Has this project had a security review? When was the last time it was checked for the people that are behind the project?”, which I think is a harder thing to do automatically. You end up having to have a set of humans that go and contact other humans, which if those people are anything like me on email, it may take ages to get a response.
There’s a fair number of metrics that we can pull in automatically to give you a light indication of if the project is healthy. I guess you have to split it in half again and go like, “Well, what do I care about the project? Is this thing that I’m doing a throw-away fun experiment or a learning exercise, or is it something I’m gonna be putting into production?” Then you have to look at things with two very different sets of metrics.
I think the methodology that they used is somewhat applicable here though. I know a lot about the CII thing because I’m at the Linux Foundation. The NodeJS project was one of the first to get a security badge. Essentially what they did was they came up with “How do we do a really good survey on projects that are problematic? Do they have a security problem?” They asked some of the similar questions that we did, like “What makes a project healthy? How do we define that?” Then they went out and did this huge survey to identify all the projects that are having a problem. Later what they did was they turned all of those things into basically a badging program. There was a set of recommendations that you can do, and if you do all of these things, then you get the security badge.
The Node project was one of the launch partners of this. It’s really simple stuff, like have a private security list, have a documented disclosure policy, have that on a website somewhere. It sounds really basic, but the number of projects that are heavily depended on that don’t do that is surprisingly big. And just having a really basic set of things that people can go do that make people feel better about their software and are actually good for the health of the projects is like a really good set of recommendations that we can come up with, that would actually be based on metrics and some really good methodology.
I’m curious to kind of move this a little bit to thinking about analytics from a maintainer’s point of view. So if you’re a maintainer and you have a project, the project gets popular, what should they be measuring for their projects? What do you think they should be paying attention to at a high level?
Someone asked me a question the other day on Twitter… They were wondering for a given library that they were maintaining what were the versions of that library that people depended on. They wanted to see for the 500 other projects that depended on it what versions were they using, because they wanted to get an idea of which things could they deprecate. As Mikeal said earlier, we wanna know the actual pain points here and if people are stuck on an old version, and how can we move them forward, so that we can drop some old code or we can kind of clean up something that we don’t like anymore. That data is very easy to get, although trying to lump that in together with SemVer ranges ends up going like, “Oh, they depend on something around this version”, as opposed to something very specific.
[01:00:59.03] But having that actual usage data around the versions, which some package managers really give you the data of a particular download for a version as well, so you can see, “Oh, this thing looks completely dead. No one has downloaded this anymore”, as opposed to the last two releases that are really heavily downloaded. And you can get that data from RubyGems. I don’t think NPM has download data on a per-version basis, as least publicly available. For other smaller package managers it’s kind of all over the place, whereas at least on GitHub you can assume everyone is looking at the default branch.
Then also looking into the forks is something that maintainers might wanna do to be able to kind of go, “Oh, people are forking this off and changing things manually. They haven’t wanted to contribute back? Why didn’t they contribute back?” It definitely seems to me to come down to very human questions, as opposed to kind of like “What versions of Node are people running when they’re using my library?” It’s more kind of like, “How can I help these people either move forward onto a newer version, or what are the exceptions that they’re having that I never see?”
I was talking to the guy at Bugsnag, who do exception tracking, and they collect a lot of exception data that actually is thrown up by an open source library and they see it in the stack trace, like “Oh, this error has come from Rack”, for example, and they were investigating if they could use or at least ask for permission for users to report that error, exception tracking data, like “This line of your source code is causing lots of people lots of exceptions, for whatever reason”, which I thought was quite interesting. I don’t think they’ve actually got around to doing that yet, though.
Yeah, I’m also interested in the types of roles of people on your project, as well. One of the projects I maintain for GitHub is called Linguist, which is actually one of our more popular open source projects, and it does the language detection on GitHub; it’s kind of a somewhat self-serviced project, like if a new language springs up in the community and you want to GitHub to recognize it and maybe syntax-highlight it, then you need to come along and add that to Linguist. The longest time it’s been myself and one of the GitHubber merging pull requests, and we just realized that the rate at which the project was able to move from being responsive was actually really severely limited by our attention. So I went and looked at who made the best pull request and being most responsive on the project in the past 6-12 months and I actually just gave a couple of those people commit rights to master.
We’ve got a little bit of policy around who gets to do releases still, just because it’s kind of coupled to our production environment, but doing that has just breathed new life into the project, and I think one of the things that was not straightforward, but you can get it from the pulls page, to see who’s got the most commits to master in the last year or two… Paying attention to who’s active on your project and then thinking about their role - it’s not the kind of hard metric, but thinking about who’s around and who actually really understands and cares about the project, has been contributing… I don’t know, I’m just reflecting on that; it’s only a few weeks that we’ve been doing it, but it’s been really successful so far, and has really put a shot in the arm in terms of energy of the project.
[01:05:02.19] My approach with open source projects I maintain like that is based off a Felix Geisendörfer’s blog post, which was I guess a couple years ago. He basically just goes, “If someone sends me a progress, I’m just gonna add him as a contributor. Because what’s the worst that could happen? If they merge something I don’t like, then I can just back it out.” And later on maybe give them release rights when they’ve kind of proved themselves a little bit that they’re not gonna go crazy… Which seems to work really well, so you get a lot more initial contributions, and those people might not stay around very long, but you see a spike in activity.
And that really developed in the Node community, too. Eventually, that turned into open-open source and more liberal contribution agreements. It’s really the basis now for Node’s core policies as well. There’s been a lot of iteration there on how you liberalize access and commit rights and stuff like that.
It’s been quite interesting to have GitHub actually go like, “Oh, this is the third pull request you’ve received from this person. You should consider adding them as a collaborator so they can do this themselves.”
Yeah, that’d be awesome.
In the Node project we do a roll-up every month just to show, “Okay, these are the people that merge a lot of stuff”, and then there’s a note next to them if they’re a committer or not, so that they can get onboarded if they’re not. That’s how we base the nominations.
If that was automatically integrated into GitHub it would save me so much time… Not having to run those scripts and post those issues, it would be fantastic.
I think Ruby on Rails runs a leader board as well of the total number of commits into any of the rails projects, and you can kind of see a little star next to the ones who are currently Rails core. It kind of gamifies it a little bit, which I don’t know if that’s a good thing or not. I guess as long as it’s people actually doing stuff for the contributions rather than just to get up the leader board…
I think it’d be cool to see that for other types of contributions too, like people that are really active in issues or people that are doing a lot of triaging work, or whatever. I hear that from people, of “Well, I also wanna recognize all these other people that are falling through the cracks or that we don’t always see.”
Right, yeah. We did this blog post recently called “The Shape Of Open Source” that kind of just shows really clearly the difference between the types of activities around a project as the contributor pool grows. You can see that the lion’s share of the activity goes from commits if it’s just a solo project to actually comments on code, and pull requests to actual code review, but then just comments on pull requests and issues, and replies to those issues. It just demonstrates the project’s kind of transitioned to… A lot of it becomes user support, and that’s a ton of work and it’s something that I think what that contributor role is. There’s been some nice thinking going around that, but I don’t think it’s yet kind of baked itself into changes in the way products like GitHub actually work.
Well, to wind this down a little bit and look more towards the future - are there any trends like that that you see actually growing over time? I’ll ask this to both of you… We’ve talked a lot about what the data looks like right now. If you look at the data now, compared to last year or compared to the year before, what are the biggest growth areas in terms of what this data looks like?
[01:08:47.15] Well, for me there’s an accelerating number of packages everywhere, across every package manager that is in a language that is still very active. Perl is slowed down a little bit, but most package managers seem to continue to gain more and more code. There’s just more choice and more software to keep track of and to choose which things you should use. There’s never just like, “Oh, there’s the one obvious choice for this thing.” It feels like it’s reaching a point where… The internet happened 10-15 years ago, where the Yahoo! curated homepage was no longer useful because they couldn’t keep up with the amount of things that they were putting in. We have the equivalent in awesome lists where people are manually adding stuff. It’s kind of like the Yahoo! Directory of the internet, whereas you need something like Google to come along and go, “Actually, here’s the things that are gonna solve your problems.”
The dependency graph does give you something like a page rank to be able to go, “If we used a combination of links to that…”, either the GitHub page or the NPM page, and dependencies from actual software projects, you would then have a good picture of the things that are the most considered to be useful. Which is something that I’ve tried to put in, but there’s a huge amount of work to keep on top of and to build at essentially Google again, but for software.
Right. Clay Shirky has been mentioned once already on this today, but let’s mention him again - he’s like, “The problem is filter failure, not information overload.” I think currently a lot of what we’ve talked about today, it’s like it’s hard to find the right thing, because the volume of open source software is growing exponentially.
I think it’s almost becoming standard to hear some of these conversations happen. Now people are like, “Yeah, but how can we measure health? How can we know whether a project is doing well?” How is the data changing? I don’t know that the data is changing necessarily that much; I think Homebrew’s adding those metrics to capture usage, I think that’s a really good step in the right direction.
Some of this is there’s data missing that we don’t necessarily have, and it will be better to have more explicit measure of consumption in the use of open source.
I think the other part of it, the biggest change that I’m seeing is that the conversation is moving pretty fast, and that to me speaks of a demand and a better understanding of the problem generally in the community, and I think that means that we’re likely to see product changes and improvements that help solve some of the really common issues for people.
[01:11:59.23] Can’t wait!
That’s great, I’m excited!
There’s a lot of people working on that kind of area as well. Did you see the Software Heritage project that was released yesterday?
Yes, yes.
So far they’re just collecting stuff, but building those kinds of tools on top of all of that, like the internet archive of software, could be a really powerful way for collecting those metrics and making them distributed out and allowing people to do interesting things on top of them
Yes.
I think we’ll leave it there. Thank you all for coming on, this was amazing.
Thanks for the conversation.
Thanks very much.
Our transcripts are open source on GitHub. Improvements are welcome. 💚 | https://changelog.com/rfc/3 | CC-MAIN-2020-45 | refinedweb | 11,742 | 63.12 |
Possible bug in editor module, when used via button callback
I'm seeing a difference and possible bug in the behavior of the editor module. (using version 1.5)
I want to read a file's contents. It works when running a basic script, but not when triggered by a button press.
Spent awhile on this, I believe this is the simplest path to reproduce:
- Create two scripts, one called "ReadMe.py" and one called "RunMe.py"
- Put some junk text in ReadMe
- Open RunMe and add a UI (calling the necessary load_view and present)
- Create a method called open_script, using the code below
- Create a button
- Set the action of the button to be open_script
- Run the UI, click the button
Code:
from console import hud_alert def open_script(sender): editor.open_file('ReadMe.py') time.sleep(0.5) # give time for file to be loaded hud_alert(editor.get_path()) hud_alert(editor.get_text())
You'll see the path and first line of RunMe rather than ReadMe. Although when you close the UI, you'll be on the ReadMe script.
But if you run the logic of open_script just on its own, not through the UI, everything works as expected.
Please advise a workaround or correct my misunderstanding?
Thank you for Pythonista!
Try:
@ui.in_background def open_script(sender):
Brilliant, that worked, thank you!
To perform a replace in that background file, I still have to leave the time.sleep() in (with a sleep less than 0.3 typically nothing happens)... Is there a better way to do that as well?
To perform a replace in that background file, I still have to leave the time.sleep() in (with a sleep less than 0.3 typically nothing happens)... Is there a better way to do that as well?
There isn't a better way right now. The editor opens files after a brief delay, which is to prevent iOS from potentially killing the app if it takes too long to launch... I'll probably change
editor.open_filein the future to return after the file has been completely loaded, but for now you'll have to live with the
sleepworkaround.
Makes complete sense, thank you. | https://forum.omz-software.com/topic/1504/possible-bug-in-editor-module-when-used-via-button-callback | CC-MAIN-2020-40 | refinedweb | 360 | 75.91 |
Dear diary, on Fri, Jul 01, 2005 at 03:56:06PM CEST, I got a letter where "Eric W. Biederman" <ebiederm@xmission.com> told me that... > "H. Peter Anvin" <hpa@zytor.com> writes: > > > In the end, it might be that the right thing to do for git on kernel.org is to > > have a single, unified object store which isn't accessible by anything other > > than git-specific protocols. There would have to be some way of dealing with, > > for example, conflicting tags that apply to different repositories, though. > > As far as I can tell public distributed tags are not that hard and if > you are going to be synching them it is probably worth working on. > > The basic idea is that instead of having one global tag of > 'linux-2.6.13-rc1' you have a global tag of > 'torvalds@osdl.org/linux-2.6.13-rc1'. > > The important part is that the tag namespace is made hierarchical > with at least 2 levels. Where the top level is a globally > unique tag owner id and the bottom level is the actual tag. This > prevents collisions when merging trees because two peoples > tags are never in the same namespace, as least when > people are not actively hostile :) I don't know, I don't consider this very appealing myself. I'd rather prefer the private tags to be per-repository rather than per-user, since those ugly "merged-here", "broken" etc. tags aren't very useful on larger scope than of a repository. OTOH, what tags would be per-user, not per-repository and not global? -- Sat Jul 02 04:12:15 2005
This archive was generated by hypermail 2.1.8 : 2005-07-02 04:12:16 EST | http://www.gelato.unsw.edu.au/archives/git/0507/5928.html | CC-MAIN-2015-18 | refinedweb | 289 | 64.91 |
In the last post we covered how to setup a Docker image to cope with the prospect of a random user ID being used when the Docker container was started. The discussion so far has though only dealt with the issue of ensuring file system access permissions were set correctly to allow the original default user, as well as the random user ID being used, to update files.
A remaining issue of concern was the fact that when a random user ID is used which doesn’t correspond to an actual user account, that UNIX tools such as ‘whoami’ will not return valid results.
I have no name!@5a72c002aefb:/notebooks$ whoami
whoami: cannot find name for user ID 10000
Up to this point this didn’t actually appear to prevent our IPython Notebook application working, but it does leave the prospect that subtle problems could arise when we start actually using IPython to do more serious work.
Lets dig in and see what this failure equates to in the context of a Python application.
Accessing user information
If we are writing Python code, there are a couple of ways using the Python standard library that we could determine the login name for the current user.
The first way is to use the ‘getuser()’ function found in the ‘getpass’ module.
import getpass
name = getpass.getuser()
If we use this from an IPython notebook when a random user ID has been assigned to the Docker container, like how ‘whoami’ fails, this will also fail.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-3a0a5fbe1d4e> in <module>()
1 import getpass
----> 2 name = getpass.getuser()/usr/lib/python2.7/getpass.pyc in getuser()
156 # If this fails, the exception will "explain" why
157 import pwd
--> 158 return pwd.getpwuid(os.getuid())[0]
159
160 # Bind the name getpass to the appropriate functionKeyError: 'getpwuid(): uid not found: 10000'
The error details and traceback displayed here actually indicate the second way of getting access to the login name. In fact the ‘getuser()’ function is just a high level wrapper around a lower level function for accessing user information from the system user database.
We could therefore also have written:
import pwd, os
name = pwd.getpwuid(os.getuid())[0]
Or being more verbose to make it more obvious what is going on:
import pwd, os
name = pwd.getpwuid(os.getuid()).pw_name
Either way, this is still going to fail where the current user ID doesn’t match a valid user in the system user database.
Environment variable overrides
You may be thinking, why bother with the ‘getuser()’ function if one could use ‘pwd.getpwuid()’ directly. Well it turns out that ‘getuser()’ does a bit more than just act as a proxy for calling ‘pwd.getpwuid()’. What it actually does is first consult various environment variables which identify the login name for the current user.
def getuser():
"""Get the username from the environment or password database.First try various environment variables, then the password
database. This works on Windows as long as USERNAME is set."""import osfor name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
user = os.environ.get(name)
if user:
return user# If this fails, the exception will "explain" why
import pwd
return pwd.getpwuid(os.getuid())[0]
These environment variables such as ‘LOGNAME’ and ‘USER’ would normally be set by the login shell for a user. When using Docker though, a login shell isn’t used and so they are not set.
For the ‘getuser()’ function at least, we can therefore get it working by ensuring that as part of the Docker image build, we set one or more of these environment variables. Typically both the ‘LOGNAME’ and ‘USER’ environment variables are set, so lets do that.
ENV LOGNAME=ipython
ENV USER=ipython
Rebuilding our Docker image with this addition to the ‘Dockerfile’ and trying ‘getuser()’ again from within a IPython Notebook and it does indeed now work.
Overriding user system wide
This change may help allow more code to execute without problems, but if code directly accesses the system user database using ‘pwd.getpwuid()’, if it doesn’t catch the ‘KeyError’ exception and handle missing user information you will still have problems.
So although this is still a worthwhile change in its own right, just in case something may want to consult ‘LOGNAME’ and ‘USER’ environment variables which would normally be set by the login shell, such as ‘getuser()’, it does not help with ‘pwd.getpwuid()’ nor UNIX tools such as ‘whoami’.
To be able to implement a solution for this wider use case gets a bit more tricky as we need to solve the issue for UNIX tools, or for that matter, any C level application code which uses the ‘getpwuid()’ function in the system C libraries.
The only way one can achieve this though is through substituting the system C libraries, or at least overriding the behaviour of key C library functions. This may sound impossible but by using a Linux capability to forcibly preload a shared library into executing processes it is actually possible and someone has even written a package we can use for this purpose.
The nss_wrapper library
The package in question is one called ‘nss_wrapper'. The library provides a wrapper for the user, group and hosts NSS API. Using nss_wrapper it is possible to define your own ‘passwd' and ‘group' files which will then be consulted when needing to lookup user information.
One way in which this package is normally used is when doing testing and you need to run applications using a dynamic set of users and you don’t want to have to create real user accounts for them. This mirrors the situation we have where when using a random user ID we will not actually have a real user account.
The idea behind the library is that prior to starting up your application you would make copies of the system user and group database files and then edit any existing entries or add additional users as necessary. When starting your application you would then force it to preload a shared library which overrides the NSS API functions in the standard system libraries such that they consult the copies of the user and group database files.
The general steps therefore are something like:
ipython@3d0c5ea773a3:/tmp$ whoami
ipythonipython@3d0c5ea773a3:/tmp$ id
uid=1001(ipython) gid=0(root) groups=0(root)ipython@3d0c5ea773a3:/tmp$ echo "magic:x:1001:0:magic gecos:/home/ipython:/bin/bash" > passwdipython@3d0c5ea773a3:/tmp$ LD_PRELOAD=/usr/local/lib64/libnss_wrapper.so NSS_WRAPPER_PASSWD=passwd NSS_WRAPPER_GROUP=/etc/group id
uid=1001(magic) gid=0(root) groups=0(root)ipython@3d0c5ea773a3:/tmp$ LD_PRELOAD=/usr/local/lib64/libnss_wrapper.so NSS_WRAPPER_PASSWD=passwd NSS_WRAPPER_GROUP=/etc/group whoami
magic
To integrate the use of the ‘nss_wrapper’ package we need to do two things. The first is install the package and the second is to add a Docker entrypoint script which can generate a modified password database file and then ensure that the ‘libnss_wrapper.so’ shared library is forcibly preloaded for all processes subsequently run.
Installing the nss_wrapper library
At this point in time the ‘nss_wrapper’ library is not available in the stable Debian package repository, still only being available in the testing repository. As we do not want in general to be pulling packages from the Debian testing repository, we are going to have to install the ’nss_wrapper’ library from source code ourselves.
To be able to do this, we need to ensure that the system packages for ‘make’ and ‘cmake’ are available. We therefore need to add these to the list of system packages being installed.
# Python binary and source dependencies
RUN apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
build-essential \
ca-certificates \
cmake \
curl \
git \
make \
language-pack-en \
libcurl4-openssl-dev \
libffi-dev \
libsqlite3-dev \
libzmq3-dev \
pandoc \
python \
python3 \
python-dev \
python3-dev \
sqlite3 \
texlive-fonts-recommended \
texlive-latex-base \
texlive-latex-extra \
zlib1g-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
We can then later on download the source package for ‘nss_wrapper’ and install it.
# Install nss_wrapper.
RUN curl -SL -o nss_wrapper.tar.gz && \
mkdir nss_wrapper && \
tar -xC nss_wrapper --strip-components=1 -f nss_wrapper.tar.gz && \
rm nss_wrapper.tar.gz && \
mkdir nss_wrapper/obj && \
(cd nss_wrapper/obj && \
cmake -DCMAKE_INSTALL_PREFIX=/usr/local -DLIB_SUFFIX=64 .. && \
make && \
make install) && \
rm -rf nss_wrapper
Updating the Docker entrypoint
At present the Docker ‘ENTRYPOINT’ and ‘CMD’ are specified in the ‘Dockerfile’ as:
ENTRYPOINT [“tini”, “--"]
CMD ["jupyter", "notebook"]
The ‘CMD’ statement in this case is the actual command we want to run to start the Jupyter Notebook application.
We haven’t said anything about what the ‘tini’ program specified by the ‘ENTRYPOINT' is all about as yet, but it is actually quite important. If you do not use ‘tini’ as a wrapper for IPython Notebook then it will not work properly. We will cover what ‘tini’ is and why it is necessary for running IPython Notebook in a subsequent post.
Now because we do require ‘tini’, but we now also want to do some other work prior to actually running the ‘jupyter notebook’ command, we are going to substitute an entrypoint script in place of ‘tini’. We will call this ‘entrypoint.sh’, make it executable, and place it in the top level directory of the repository. After its copied into place, the ‘ENTRYPOINT’ specified in the ‘Dockerfile’ will then need to be:
ENTRYPOINT ["/usr/src/jupyter-notebook/entrypoint.sh"]
The actual ‘entrypoint.sh’ we will specify as:
#!/bin/sh# Override user ID lookup to cope with being randomly assigned IDs using
# the -u option to 'docker run'.USER_ID=$(id -u)if [ x"$USER_ID" != x"0" -a x"$USER_ID" != x"1001" ]; then
NSS_WRAPPER_PASSWD=/tmp/passwd.nss_wrapper
NSS_WRAPPER_GROUP=/etc/groupcat /etc/passwd | sed -e ’s/^ipython:/builder:/' > $NSS_WRAPPER_PASSWDecho "ipython:x:$USER_ID:0:IPython,,,:/home/ipython:/bin/bash" >> $NSS_WRAPPER_PASSWDexport NSS_WRAPPER_PASSWD
export NSS_WRAPPER_GROUPLD_PRELOAD=/usr/local/lib64/libnss_wrapper.so
export LD_PRELOAD
fiexec tini -- "$@"
Note that we still execute ‘tini’ as the last step. We do this using ‘exec’ so that its process will replace the entrypoint script and take over as process ID 1, ensuring that signals get propagated properly, as well as to ensure some details related to process management are handled correctly. We will also pass on all command line arguments given to the entrypoint script to ‘tini’. The double quotes around the arguments reference ensure that argument quoting is handled properly when passing through arguments.
What is now new compared to what was being done before is the enabling of the ‘nss_wrapper’ library. We do not do this though when we are running as ‘root’, were that is that the Docker image was still forced to run as ‘root’ even though the aim is that it run as a non ‘root’ user. We also do not need to do it when we are run with the default user ID.
When run as a random user ID we do two things with the password database file that we will use with ‘nss_wrapper’.
The first is that we change the login name corresponding to the existing user ID of ‘1001’. This is the default ‘ipython’ user account we created previously. We do this by simply replacing the ‘ipython’ login name in the password file when we copy it, with the name ‘builder’ instead.
The second is that we add a new password database file entry corresponding to the current user ID, that being whatever is the random user ID allocated to run the Docker container. In this case we use the login name of ‘ipython’.
The reason for swapping the login names so the current user ID uses ‘ipython’ rather than the original user ID of ‘1001’, is so that the application when run will still think it is the ‘ipython’ user. What we therefore end up with in our copy of the password database file is:
docker run -it --rm -u 10000 -p 8888:8888 jupyter-notebook bash
ipython@0ff73693d433:/notebooks$ tail -2 /tmp/passwd.nss_wrapper
builder:x:1001:0:IPython,,,:/home/ipython:/bin/bash
ipython:x:10000:0:IPython,,,:/home/ipython:/bin/bash
Immediately you can already see that the shell prompt now looks correct. Going back and running our checks from before, we now see:
ipython@0ff73693d433:/notebooks$ whoami
ipython
ipython@0ff73693d433:/notebooks$ id
uid=10000(ipython) gid=0(root) groups=0(root)
ipython@0ff73693d433:/notebooks$ env | grep HOME
HOME=/home/ipython
ipython@0ff73693d433:/notebooks$ touch $HOME/magic
ipython@0ff73693d433:/notebooks$ touch /notebooks/magic
ipython@0ff73693d433:/notebooks$ ls -las $HOME
total 24
4 drwxrwxr-x 4 builder root 4096 Dec 24 10:22 .
4 drwxr-xr-x 6 root root 4096 Dec 24 10:22 ..
4 -rw-rw-r-- 1 builder root 220 Dec 24 10:08 .bash_logout
4 -rw-rw-r-- 1 builder root 3637 Dec 24 10:08 .bashrc
4 drwxrwxr-x 2 builder root 4096 Dec 24 10:08 .jupyter
0 -rw-r--r-- 1 ipython root 0 Dec 24 10:22 magic
4 -rw-rw-r-- 1 builder root 675 Dec 24 10:08 .profile
So even though the random user ID didn’t have an entry in the original system password database file, by using ‘nss_wrapper’ we can trick any applications to use our modified password database file for user information. This means we can dynamically generate a valid password database file entry for the random user ID which was used.
With the way we swapped the login name for the default user ID of ‘1001’, with the random user ID, as far as any application is concerned it is still running as the ‘ipython’ user.
So we can distinguish, any files that were created during the image build as the original ‘ipython’ user will now instead show as being owned by ‘builder’, which if we look it up maps to user ID of ‘1001’.
ipython@0ff73693d433:/notebooks$ id builder
uid=1001(builder) gid=0(root) groups=0(root)
ipython@0ff73693d433:/notebooks$ getent passwd builder
builder:x:1001:0:IPython,,,:/home/ipython:/bin/bash
Running as another name user
Not that there strictly should be a reason for doing so, but it is possible to also force the Docker container to run as some other user ID with an entry in the password database file, but because they have their own distinct primary group assignments, you do have to override the group to be ‘0’ so that it can update any required directories.
$ docker run -it --rm -u 5 -p 8888:8888 jupyter-notebook bash
games@36ec17b1d9c1:/notebooks$ whoami
games
games@36ec17b1d9c1:/notebooks$ id
uid=5(games) gid=60(games) groups=60(games)
games@36ec17b1d9c1:/notebooks$ env | grep HOME
HOME=/home/ipython
games@36ec17b1d9c1:/notebooks$ touch $HOME/magic
touch: cannot touch ‘/home/ipython/magic’: Permission denied
games@36ec17b1d9c1:/notebooks$ touch /notebooks/magic
touch: cannot touch ‘/notebooks/magic’: Permission denied
$ docker run -it --rm -u 5:0 -p 8888:8888 jupyter-notebook bash
games@e2ecabedab47:/notebooks$ whoami
games
games@e2ecabedab47:/notebooks$ id
uid=5(games) gid=0(root) groups=60(games)
games@e2ecabedab47:/notebooks$ env | grep HOME
HOME=/home/ipython
games@e2ecabedab47:/notebooks$ touch $HOME/magic
games@e2ecabedab47:/notebooks$ touch /notebooks/magic
games@e2ecabedab47:/notebooks$ ls -las $HOME
total 24
4 drwxrwxr-x 4 builder root 4096 Dec 24 10:41 .
4 drwxr-xr-x 6 root root 4096 Dec 24 10:41 ..
4 -rw-rw-r-- 1 builder root 220 Dec 24 10:39 .bash_logout
4 -rw-rw-r-- 1 builder root 3637 Dec 24 10:39 .bashrc
4 drwxrwxr-x 2 builder root 4096 Dec 24 10:39 .jupyter
0 -rw-r--r-- 1 games root 0 Dec 24 10:41 magic
4 -rw-rw-r-- 1 builder root 675 Dec 24 10:39 .profile
Running as process ID 1
Finally if we startup the IPython Notebook application localy with Docker, or on OpenShift, then everything still works okay. Further, as well as the ‘getpass.getuser()’ function working, use of ‘pwd.getpwuid(os.getuid())’ also works, this being due to the use of the ‘nss_wrapper’ library.
So everything is now good and we shouldn’t have any issues. There was though something already present in the way that the ‘jupiter/notebook’ Docker image was set up that is worth looking at. This was the use of the ‘tini’ program as the ‘ENTRYPOINT’ in the ‘Dockerfile’. This relates to problems that can arise when running an application as process ID 1. I will look at what this is all about in the next post.
3 comments:
Hello Graham!
Thanks for a great article!
I'm trying to use this "hack" to run a container with jenkins. I use official jenkins image (FROM jenkins/jenkins:lts), the nss_wrapper is installed with no problem, but how to configure entrypoint in this case?
Thanks,
Nadia
I wouldn't use nss_wrapper now. It is easier to make /etc/passwd and /etc/group files writable to group root. Then in entry point script add entries directly to the files such as is done in:
This is much simpler than mucking around with the shared libraries.
Updating the passwd file is even recommended way in OpenShift docs now. See 'Support arbitrary user IDs' in:
Thanks a lot for a reference, it helps me to configure my container.
I use this example: and it works perfect. | http://blog.dscpl.com.au/2015/12/unknown-user-when-running-docker.html?showComment=1521657787725 | CC-MAIN-2021-49 | refinedweb | 2,875 | 59.33 |
I want to create an efficient circular buffer in python (with the goal of taking averages of the integer values in the buffer).
Is this an efficient way to use a list to collect values?
def add_to_buffer( self, num ):
self.mylist.pop( 0 )
self.mylist.append( num )
I would use
collections.deque with a
maxlen arg
>>> import collections >>> d = collections.deque(maxlen=10) >>> d deque([], maxlen=10) >>> for i in xrange(20): ... d.append(i) ... >>> d deque([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], maxlen=10)
There is a recipe in the docs for
deque that is similar to what you want. My assertion that it's the most efficient rests entirely on the fact that it's implemented in C by an incredibly skilled crew that is in the habit of cranking out top notch code. | https://codedump.io/share/9rbMlOk3kYvP/1/efficient-circular-buffer | CC-MAIN-2017-47 | refinedweb | 140 | 74.39 |
10.6 Archie
Archie is a database/index of the numerous FTP sites (and their contents) throughout the world. You can use an Archie client to search the database for specific files. In this example, we will use Brendan Kehoe's Archie client software (version 1.3) to connect to an Archie server and search for user-specified information. Though we could have easily written a client using the socket library, it would be a waste of time, since an excellent one exists. This Archie gateway is based on ArchiPlex, developed by Martijn Koster.
#!/usr/local/bin/perl

$webmaster = "Shishir Gundavaram (shishir\@bu\.edu)";
$archie = "/usr/local/bin/archie";
$error = "CGI Archie Gateway Error";
$default_server = "archie.rutgers.edu";
$timeout_value = 180;
The archie variable contains the full path to the Archie client. Make sure you have an Archie client with this pathname on your local machine; if you do not have a client, you have to telnet to a machine with a client and run this program there.
The default server to search is stored. This is used in case the user failed to select a server.
Finally, timeout_value contains the number of seconds after which the gateway will return an error message and terminate. This is so that the user will not have to wait forever for the search results.
%servers = ('ANS Net (New York, USA)',        'archie.ans.net',
            'Australia',                      'archie.au',
            'Canada',                         'archie.mcgill.ca',
            'Finland/Mainland Europe',        'archie.funet.fi',
            'Germany',                        'archie.th-darmstadt.de',
            'Great Britain/Ireland',          'archie.doc.ic.ac.uk',
            'Internic Net (New York, USA)',   'ds.internic.net',
            'Israel',                         'archie.ac.il',
            'Japan',                          'archie.wide.ad.jp',
            'Korea',                          'archie.kr',
            'New Zealand',                    'archie.nz',
            'Rutgers University (NJ, USA)',   'archie.rutgers.edu',
            'Spain',                          'archie.rediris.es',
            'Sweden',                         'archie.luth.se',
            'SURANet (Maryland, USA)',        'archie.sura.net',
            'Switzerland',                    'archie.switch.ch',
            'Taiwan',                         'archie.ncu.edu.tw',
            'University of Nebraska (USA)',   'archie.unl.edu');
Some of the Archie servers and their IP names are stored in an associative array. We will create the form for this gateway dynamically, listing all of the servers located in this array.
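As a minimal sketch of the idea (the variable names here are illustrative, and not necessarily the ones used by the actual display_form subroutine), the selection menu can be generated from this hash like so:

print "<SELECT NAME=\"server\">", "\n";
foreach $location (sort keys %servers) {
    print "<OPTION VALUE=\"$servers{$location}\">", $location, "\n";
}
print "</SELECT>", "\n";

Sorting the keys simply lists the server locations in alphabetical order.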
$request_method = $ENV{'REQUEST_METHOD'};

if ($request_method eq "GET") {
    &display_form ();
The form will be created and displayed if this program was accessed directly with the browser (i.e., through a GET request).
} elsif ($request_method eq "POST") {
    &parse_form_data (*FORM);
    $command = &parse_archie_fields ();
All of the form data is decoded and stored in the FORM associative array. The parse_archie_fields subroutine uses the form data in constructing a query to be passed to the Archie client.
    $SIG{'ALRM'} = "time_to_exit";
    alarm ($timeout_value);
To understand how this array is used, you have to understand that the UNIX kernel checks every time an interrupt or break arrives for a program, and asks, "What routine should I call?" The routine that the program wants called is a signal handler. Perl associates a handler with a signal in the SIG associative array.
As shown above, the traditional way to implement a time-out is to set an ALRM signal to be called after a specified number of seconds. The first line says that when an alarm is signaled, the time_to_exit subroutine should be executed. The Perl alarm call on the second line schedules the ALRM signal to be sent in the number of seconds represented by the $timeout_value variable.
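As a standalone illustration (separate from the gateway itself), here is the same mechanism wrapped around a blocking read; if no input arrives within five seconds, the handler runs instead:

sub handler
{
    die "Timed out!\n";
}

$SIG{'ALRM'} = "handler";
alarm (5);                # deliver SIGALRM in five seconds
$line = <STDIN>;          # a blocking call that might hang
alarm (0);                # cancel the alarm once the read returns

Calling alarm with an argument of zero cancels any pending alarm, which is why a handler needs to be active only while the slow operation is in progress.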
    open (ARCHIE, "$archie $command |");
    $first_line = <ARCHIE>;
A pipe is opened to the Archie client. The command variable contains a "query" that specifies various command-line options, such as search type and Archie server address, as well as the string to search for. The parse_archie_fields subroutine makes sure that no shell metacharacters are specified, since the command variable is "exposed" to the shell.
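For illustration only, suppose the user asked for an exact search of the string "emacs" on the Rutgers server. The command variable would then hold something along these lines (the exact option letters are assumptions based on typical Archie clients, not values taken from this program):

$command = '-e -h archie.rutgers.edu "emacs"';
open (ARCHIE, "$archie $command |");

so the shell ultimately runs /usr/local/bin/archie -e -h archie.rutgers.edu "emacs", and this program reads the client's output through the ARCHIE file handle.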
    if ($first_line =~ /(failed|Usage|WARNING|Timed)/) {
        &return_error (500, $error, "The archie client encountered a bad request.");
    } elsif ($first_line =~ /No [Mm]atches/) {
        &return_error (500, $error, "There were no matches for <B>$FORM{'query'}</B>.");
    }
If the first line from the Archie server contains either an error or a "No Matches" string, the return_error subroutine is called to return a more friendly (and verbose) message. If there is no error, the first line is usually blank.
    print "Content-type: text/html", "\n\n";
    print "<HTML>", "\n";
    print "<HEAD><TITLE>", "CGI Archie Gateway", "</TITLE></HEAD>", "\n";
    print "<BODY>", "\n";
    print "<H1>", "Archie search for: ", $FORM{'query'}, "</H1>", "\n";
    print "<HR>", "<PRE>", "\n";
The usual type of header information is output. The following lines of code parse the output from the Archie server, and create hypertext links to the matched files. Here is the typical format for the Archie server output. It lists each host where a desired file (in this case, emacs) is found, followed by a list of all publicly accessible directories containing a file of that name. Files are listed in long format, so you can see how old they are and what their sizes are.
Host amadeus.ireq-robot.hydro.qc.ca Location: /pub DIRECTORY drwxr-xr-x 512 Dec 18 1990 emacs Host anubis.ac.hmc.edu Location: /pub DIRECTORY drwxr-xr-x 512 Dec 6 1994 emacs Location: /pub/emacs/packages/ffap DIRECTORY drwxr-xr-x 512 Apr 5 02:05 emacs Location: /pub/perl/dist DIRECTORY drwxr-xr-x 512 Aug 16 1994 emacs Location: /pub/perl/scripts/text-processing FILE -rwxrwxrwx 16 Feb 25 1994 emacs
We can enhance this output by putting in hypertext links. That way, the user can open a connection to any of the hosts with a click of a button and retrieve the file. Here is the code to parse this output:
while (<ARCHIE>) { if ( ($host) = /^Host (\S+)$/ ) { $host_url = join ("", "ftp://", $host); s|$host|<A HREF="$host_url">$host</A>|; <ARCHIE>;
If the line starts with a "Host", the specified host is stored. A URL to the host is created with the join function, using the ftp scheme and the hostname--for example, if the hostname were, the URL would be. Finally, the blank line after this line is discarded.
} elsif (/^\s+Location:\s+(\S+)$/) { $location = $1; s|$location|<A HREF="${host_url}${location}">$location</A>|; } elsif ( ($type, $file) = /^\s+(DIRECTORY|FILE).*\s+(\S+)/) { s|$type|<I>$type</I>|; s|$file|<A HREF="${host_url}${location}/${file}">$file</A>|; } elsif (/^\s*$/) { print "<HR>"; } print; }
One subtle feature of regular expressions is shown here: They are "greedy," eating up as much text as they can. The expression (DIRECTORY|FILE).*\s+ means match DIRECTORY or FILE, then match as many characters as you can up to whitespace. There are chunks of whitespace throughout the line, but the .* takes up everything up to the last whitespace. This leaves just the word "emacs" to match the final parenthesized expression (\S+).
The rest of the lines are read and parsed in the same manner and displayed (see Figure 10.1). If the line is empty, a horizontal rule is output--to indicate the end of each entry.
$SIG{'ALRM'} = "DEFAULT"; close (ARCHIE); print "</PRE>"; print "</BODY></HTML>", "\n";
Finally, the ALRM signal is reset, and the file handle is closed.
} else { &return_error (500, $error, "Server uses unspecified method"); } exit (0);
Remember how we set the SIG array so that a signal would cause the time_to_exit subroutine to run? Here it is:
sub time_to_exit { close (ARCHIE); &return_error (500, $error, "The search was terminated after $timeout_value seconds."); }
When this subroutine runs, it means that the 180 seconds that were allowed for the search have passed, and that it is time to terminate the script. Generally, the Archie server returns the matched FTP sites and its files quickly, but there are times when it can be queued up with requests. In such a case, it is wise to terminate the script, rather than let the user wait for a long period of time.
Now, we have to build a command that the Archie client recognizes using the parse_archie_fields subroutine:
sub parse_archie_fields { local ($query, $server, $type, $address, $status, $options); $status = 1; $query = $FORM{'query'}; $server = $FORM{'server'}; $type = $FORM{'type'}; if ($query !~ /^\w+$/) { &return_error (500, $error, "Search query contains invalid characters.");
If the query field contains non-alphanumeric characters (characters other than A-Z, a-z, 0-9, _), an error message is output.
} else { foreach $address (keys %servers) { if ($server eq $address) { $server = $servers{$address}; $status = 0; } }
The foreach loop iterates through the keys of the servers associative array. If the user-specified server matches the name as contained in the array, the IP name is stored in the server variable, and the status is set to zero.
if ($status) { &return_error (500, $error, "Please select a valid archie host.");
A status of non-zero indicates that the user specified an invalid address for the Archie server.
} else { if ($type eq "cs_sub") { $type = "-c"; } elsif ($type eq "ci_sub") { $type = "-s"; } else { $type = "-e"; }
If the user selected "Case Sensitive Substring", the "-c" switch is used. The "-s" switch indicates a "Case Insensitive Substring". If the user did not select any option, the "-e" switch ("Exact Match") is used.
$options = "-h $server $type $query"; return ($options); } } }
A string containing all of the options is created, and then returned to the main program.
Our last task is a simple one--to create a form that allows the user to enter a query, using the display_form subroutine. The program creates the form dynamically because some information is subject to change (i.e., the list of servers).
sub display_form { local ($archie); print <<End_of_Archie_One; Content-type: text/html <HTML> <HEAD><TITLE>Gateway to Internet Information Servers</TITLE></HEAD> <BODY> <H1>CGI Archie Gateway</H1> <HR> <FORM ACTION="/cgi-bin/archie.pl" METHOD="POST"> Please enter a string to search from: <BR> <INPUT TYPE="text" NAME="query" SIZE=40> <P> What archie server would you like to use (<B>please</B>, be considerate and use the one that is closest to you): <BR> <SELECT NAME="server" SIZE=1> End_of_Archie_One foreach $archie (sort keys %servers) { if ($servers{$archie} eq $default_server) { print "<OPTION SELECTED>", $archie, "\n"; } else { print "<OPTION>", $archie, "\n"; } }
This loop iterates through the associative array and displays all of the server names.
print <<End_of_Archie_Two; </SELECT> <P> Please select a type of search to perform: <BR> <INPUT TYPE="radio" NAME="type" VALUE="exact" CHECKED>Exact<BR> <INPUT TYPE="radio" NAME="type" VALUE="ci_sub">Case Insensitive Substring<BR> <INPUT TYPE="radio" NAME="type" VALUE="cs_sub">Case Sensitive Substring<BR> <P> <INPUT TYPE="submit" VALUE="Start Archie Search!"> <INPUT TYPE="reset" VALUE="Clear the form"> </FORM> <HR> </BODY> </HTML> End_of_Archie_Two }
The dynamic form looks like that in Figure 10.2.
This was a rather simple program because we did not have to deal with the Archie server directly, but rather through a pre-existing client. Now, we will look at an example that is a little bit more complicated.
Back to: CGI Programming on the World Wide Web
© 2001, O'Reilly & Associates, Inc. | http://oreilly.com/openbook/cgi/ch10_06.html | crawl-003 | refinedweb | 1,784 | 63.29 |
I have imported math in my cap
so math.atan2 works if it's in my cap's python scripts
at runtime if I use the Run Python Script action, I can use a script directly from math such as
Run Python Script: "x=math.atan2(1,2)";
but a function defined in the cap which can be called successfully within the cap itself:
def atantwo(first,second):
return math.atan2(first,second);
#x=atantwo(1,1); would work fine here[/code:ggiv2n5i]
but if I use the Run Python Script action at runtime and try:
Run Python Script: "x=atantwo(1,2);"
I get an error:
NameError: global name 'atan2' is not defined
is this a bug, or a limitation, or is there a special way to make it work?
Develop games in your browser. Powerful, performant & highly capable.
It's working fine for me. Could you post a cap that shows the problem.
oh sorry, forgot i posted this, ill try to remember to post a cap when i get home | https://www.construct.net/forum/construct-classic/help-support-using-construct-classic-38/system-runpythonscript-questio-35525 | CC-MAIN-2018-30 | refinedweb | 173 | 71.24 |
I really like Colabotary as its free, uses a GPU (from what I have read) and uses a jupyter book like environment and wanted to see if anyone else was using it for Fastai. I have managed to load the required dependencies and wanted to start a thread we can use for trouble shooting.
Has anyone here successfully used this to train?
Here is the link to Colabotary:
Here is a list of additional dependencies I had to install.
!pip install fastai !pip install opencv-python !apt update && apt install -y libsm6 libxext6 !pip3 install !pip3 install torchvision
For uploading files and linking to your google drive - although having issues using the files once uploaded - probably simple directory issue
from google.colab import files uploaded = files.upload() for fn in uploaded.keys(): print('User uploaded file "{name}" with length {length} bytes'.format( name=fn, length=len(uploaded[fn])))
Linking to your Google cloud account
!pip install -U -q PyDrive) | http://forums.fast.ai/t/colaboratory-and-fastai/10122 | CC-MAIN-2018-13 | refinedweb | 159 | 59.8 |
filequeue 0.3.1
A thread-safe queue object which is interchangeable with the stdlib Queue. Any overflow goes into a compressed file to keep excessive amounts of queued items out of memory
Contents
Overview
filequeue is a Python library that provides a thread-safe queue which is a subclass of Queue.Queue from the stdlib.
filequeue.FileQueue will overflow into a compressed file if the number of items exceeds maxsize, instead of blocking or raising Full like the regular Queue.Queue.
There is also filequeue.PriorityFileQueue and filequeue.LifoFileQueue implementations.
Note filequeue.FileQueue and filequeue.LifoFileQueue will only behave the same as Queue.Queue and Queue.LifoQueue respectively if they are initialised with maxsize=0 (the default). See __init__ docstring for details (help(FileQueue))
Note filequeue.PriorityFileQueue won't currently work exactly the same as a straight out replacement for Queue.PriorityQueue. The interface is very slightly different (extra optional kw argument on put and __init__), although it will work it won't behave the same. It might still be useful to people though and hopefully I'll be able to address this in a future version.
Requirements:
- Python 2.5+ or Python 3.x
Why?
The motivation came from wanting to queue a lot of work, without consuming lots of memory.
The interface of filequeue.FileQueue matches that of Queue.Queue (or queue.Queue in python 3.x). With the idea being that most people will use Queue.Queue, and can swap in a filequeue.FileQueue only if the memory usage becomes an issue. (Same applies for filequeue.LifoFileQueue)
Issues
Any issues please post on the github page.
Changelog
0.3.1 (2013-01-10)
- Added unittests for LifoFileQueue from Queue.
0.3.0 (2013-01-10)
- Added LifoFileQueue implementation that returns the most recently added items first.
- Reverted the file type from gzip to a regular file for the time being.
0.2.3 (2012-11-27)
- Fix for PriorityFileQueue where it wasn't returning items in the correct order according to the priority.
- Added import * into __init__.py to make the namespace a bit nicer.
- Added the unit tests from stdlibs Queue (quickly edited out the full checks and LifoQueue tests).
0.2.2 (2012-11-27)
- Initial public release.
- Downloads (All Versions):
- 14 downloads in the last day
- 114 downloads in the last week
- 348 downloads in the last month
- Author: Paul Wiseman
- Keywords: queue thread-safe file gzip
- License: BSD
- Categories
- Development Status :: 3 - Alpha
- Intended Audience :: Developers
- License :: OSI Approved :: BSD License
- Programming Language :: Python :: 2
- Programming Language :: Python :: 2.5
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3
- Programming Language :: Python :: 3.0
- Programming Language :: Python :: 3.1
- Programming Language :: Python :: 3.2
- Programming Language :: Python :: 3.3
- Topic :: Utilities
- Package Index Owner: GP89
- DOAP record: filequeue-0.3.1.xml | https://pypi.python.org/pypi/filequeue | CC-MAIN-2014-10 | refinedweb | 475 | 61.43 |
11.13. Inheritance and Interfaces¶
An interface in Java is a special type of abstract class that can only contain public abstract methods (every method is assumed to be
public and
abstract even if these keywords are not specified) and public class constants.
List is an interface in Java. Interfaces are declared using the interface keyword. One interface can inherit from another interface.
public interface Checker { boolean check (Object obj); }
The code above declares an interface called
Checker that contains a public abstract method called
check that returns true or false. Classes that implement this interface must provide the body for the
check method.
Another example of an interface in Java is the Iterator interface. It is used to loop through collection classes (classes that hold groups of objects like
ArrayList).
- I only
- Interfaces can also be extended (inherited from).
- II only
- II is true, but I is also true.
- I and II
- Both I and II are true.
- I, II, and III
- You can not create an object of an interface type. You can only create objects from concrete (not abstract) classes.
10-11-1: Which of the following is true about interfaces?
I. Interfaces can only contain abstract methods or class constants. II. Interfaces can be extended. III. Interfaces can be instantiated (you can create an object of the interface type).
11.13.1. What is the purpose of an interface?¶
The purpose of an interface is to separate what you want a type to be able to do (defined by the method signatures) from how it does that. This makes it easy to substitute one class for another if they both implement the same interface and you have declared the variable using the interface type. The
List interface defines what a class needs to be able to do in order to be considered a
List. You have to be able to add an item to it, get the item at an index, remove the item from an index, get the number of elements in the list, and so on. There are several classes that implement the
List interface. You only have to know about
ArrayList for the exam, which is a class that implements the
List interface using an array.
See for the Java documentation for the full definition of the
List interface.)
Interfaces make it easy to write general methods that use the methods defined in the interface. | https://runestone.academy/runestone/static/JavaReview/OOBasics/ooInheritanceAndInterfaces.html | CC-MAIN-2019-26 | refinedweb | 402 | 64.41 |
Essential Math for Data Science: The Poisson Distribution
The Poisson distribution, named after the French mathematician Denis Simon Poisson, is a discrete distribution function describing the probability that an event will occur a certain number of times in a fixed time (or space) interval.
The Poisson distribution, named after the French mathematician Denis Simon Poisson, is a discrete distribution function describing the probability that an event will occur a certain number of times in a fixed time (or space) interval. It is used to model count-based data, like the number of emails arriving in your mailbox in one hour or the number of customers walking into a shop in one day, for instance.
Mathematical Definition
Let’s start with an example, Figure 1 shows the number of emails received by Sarah in intervals of one hour.
The bar heights show the number of one-hour intervals in which Sarah observed the corresponding number of emails. For instance, the highlighted bar shows that there were around 15 one-hour slots where she received a single email.
The Poisson distribution is parametrized by the expected number of events λ (pronounced “lambda”) in a time or space window. The distribution is a function that takes the number of occurrences of the event as input (the integer called k in the next formula) and outputs the corresponding probability (the probability that there are k events occurring).
The Poisson distribution, denoted as Poi is expressed as follows:
for k = 0, 1, 2, ...
The formula of Poi(k; λ) returns the probability of observing k events given the parameter λ which corresponds to the expected number of occurrences in that time slot.
Discrete Distributions
Note that both the binomial and the Poisson distributions are discrete: they give probabilities of discrete outcomes: the number of times an event occurs for the Poisson distribution and the number of successes for the binomial distribution. However, while the binomial calculates this discrete number for a discrete number of trials (like a number of coin toss), the Poisson considers an infinite number of trials (each trial corresponds to a very small portion of time) leading to a very small probability associated with each event.
You can refer to the section below to see how the Poisson distribution is derived from the binomial distribution.
Example
Priya is recording birds in a national park, using a microphone placed in a tree. She is counting the number of times a bird is recorded singing and wants to model the number of birds singing in a minute. For this task, she’ll assume independence of the detected birds.
Looking at the data of the last few hours, Priya observes that in average, there are two birds detected in an interval of one minute. So the value 2 could be a good candidate for the parameter of the distribution λ. Her goal is to know the probability that a specific number of birds will sing in the next minute.
Let’s implement the Poisson distribution function from the formula you saw above:
def poisson_distribution(k, lambd): return (lambd ** k * np.exp(-lambd)) / np.math.factorial(k)
Remember that λ is the expected number of times a bird sings in a one-minute interval, so in this example, you have λ=2. The function
poisson_distribution(k, lambd) takes the value of k and λ and returns the probability to observe k occurrences (that is, to record k birds singing).
For instance, the probability of Priya observing 5 birds in the next minute would be:
poisson_distribution(k=5, lambd=2)
0.03608940886309672
The probability that 5 birds will sing in the next minute is around 0.036 (3.6%).
As with the binomial function, this will overflow for larger values of k. For this reason, you might want to use
poisson from the module
scipy.stats, as follows:
from scipy.stats import poisson poisson.pmf(5, 2)
0.03608940886309672
Let’s plot the distribution for various values of k:
lambd=2 k_axis = np.arange(0, 25) distribution = np.zeros(k_axis.shape[0]) for i in range(k_axis.shape[0]): distribution[i] = poisson.pmf(i, lambd) plt.bar(k_axis, distribution) # [...] Add axes, labels...
The probabilities corresponding to the values of k are summarized in the probability mass function shown in Figure
- You can see that it is most probable that Priya will hear one or two birds singing in the next minute.
Finally, you can plot the function for different values of λ:
f, axes = plt.subplots(6, figsize=(6, 8), sharex=True) for lambd in range(1, 7): k_axis = np.arange(0, 20) distribution = np.zeros(k_axis.shape[0]) for i in range(k_axis.shape[0]): distribution[i] = poisson.pmf(i, lambd) axes[lambd-1].bar(k_axis, distribution) axes[lambd-1].set_xticks(np.arange(0, 20, 2)) axes[lambd-1].set_title(f"$\lambda$: {lambd}") # Add axes labels etc.
Figure 3 shows the Poisson distribution for various values of λ, which looks a bit like a normal distribution in some cases. However, the Poisson distribution is discrete, not symmetric when the value of λ is low, and bounded to zero.
Bonus: Deriving the Poisson Distribution
Let’s see how the Poisson distribution is derived from the Binomial distribution.
You saw in Essential Math for Data Science that if you run a random experiment multiple times, the probability to get mm successes over N trials, with a probability of a success μ at each trial, is calculated through the binomial distribution:
Problem Statement
How can you use the binomial formula to model the probability to observe an event a certain number of times in a given time interval instead of in a certain number of trials? There are a few problems:
- You don’t know NN, since there is no specific number of trials, only a time window.
- You don’t know μμ, but you have the expected number of times the event will occur. For instance, you know that in the past 100 hours, you received an average of 3 emails per hour, and you want to know the probability of receiving 5 emails in the next hour.
Let’s handle these issues mathematically.
To address the first point, you can consider time as small discrete chunks. Let’s call these chunck ϵϵ (pronounced “epsilon”), as shown in Figure 4. If you consider each chunk as a trial, you have N chunks.
The estimation of a continuous time scale is more accurate when ϵ is very small. If ϵϵ is small, the number of segments N will be large. In addition, since the segments are small, the probability of success in each segment is also small.
To summarize, you want to modify the binomial distribution to be able model a very large number of trials, each with a very small probability of success. The trick is to consider that N tends toward infinity (because continuous time is approximated by having a value of ϵ that tends toward zero).
Update the Binomial Formula
Let’s find μμ in this case and replace it in the binomial formula. You know the expected number of event in a period of time t, which we’ll call λ (pronounced “lambda”). Since you split t into small intervals of length ϵ, you have the number of trials:
You have λ as the number of successes in the N trials. So the probability μ to have a success in one trial is:
Replacing μμ in the binomial formula, you get:
Developing the expression, writing the binomial coefficient as factorials (as you did in Essential Math for Data Science), and using the fact
, you have:
Let’s consider the first element of this expression. If you state that NN tends toward infinity (because ϵ tends toward zero), you have:
This is because k can be ignored when it is small in comparison to N. For instance, you have:
which approximates
So you the first ratio becomes:
Then, using the fact that
, you have:
Finally, since
tends toward 1 when N tends toward the infinity:
Let’s replace all of this in the formula of the binomial distribution:
This is the Poisson distribution, denoted as Poi:
for k = 0, 1, 2, ...:
- Essential Math for Data Science: Probability Density and Probability Mass Functions
- Essential Math for Data Science: Integrals And Area Under The Curve
- Free Mathematics Courses for Data Science & Machine Learning | https://www.kdnuggets.com/2020/12/introduction-poisson-distribution-data-science.html | CC-MAIN-2021-43 | refinedweb | 1,395 | 51.89 |
ListModel javascript functor storage
I am still quite new to qt/qml and javascript so this might be newbie question. I have a situation where I have an application that has data stored in a ListModel, when the ListView that owns the model updates an item in the model I want to call a functor that will update the value in a couple places, but every time I try to run the functor stored in the model I get a type error.
@
qrc:///main.qml:50: TypeError: Type error
qrc:///main.qml:50: TypeError: Type error
qrc:///main.qml:50: TypeError: Type error
@
Here is a demo of the issue. I created it using default Qt Quick application in Qt Creator. I am using Qt 5.3.1.
@
import QtQuick 2.2
import QtQuick.Window 2.1
Window {
visible: true
width: 360
height: 360
property int value1: 5 property int value2: 6 property int value3: 7 Component.onCompleted: { function Func(n) { return function(i) { value1 = i; return "This is cell: " + i + " and is associated to property that had value: " + n; } }; view.model.append( { func : new Func(value1) } ); view.model.append( { func : new Func(value2) } ); view.model.append( { func : new Func(value3) } ); } MouseArea { anchors.fill: parent onClicked: { Qt.quit(); } } ListView { id: view anchors.fill: parent model: ListModel {} delegate: Component { Rectangle { width: view.width height: 25 color: "red" border.color: "black" Text { id: test text: { var test = model.func; test(index); } } } } }
}
@
There might be better ways to do this, but I would still like to know what I am doing wrong. Thanks | https://forum.qt.io/topic/48047/listmodel-javascript-functor-storage | CC-MAIN-2022-27 | refinedweb | 260 | 60.01 |
Has ever happened to you that you have a collection and you want to split it into two parts: one that satisfies certain assertion and the other one that doesn’t? In that case, do you resort to use a filter and a filterNot? Don’t worry, in this
teleshopping ad post, we’ll see some not-so-popular but common methods for splitting collections.
takeWhile
For a
Traversable[A] collection, we have the following method:
def takeWhile(p: A => Boolean): Traversable[A]
It gets as parameter the condition that has to be checked by the first N elements to be collected from current collection. Don’t worry, an example illustrates it better:
val numbers = List(2, 4, 5, 6, 7) val firstEven = numbers.takeWhile(_ % 2 == 0) //List(2, 4)
As you can see, the list you get is the result of getting the first elements in the collection while the given assertion is checked.
dropWhile
Like his elder brother,
dropWhile receives as parameter a function, but its behavior is based on removing all elements from the beginning until the given condition is not checked.
E.g.:
val names = List("Julio", "Jose", "Alberto", "Javier") val survivors = names.dropWhile(_.startsWith("J")) //List("Alberto","Javier")
Even though there are some other elements in the list that check the condition, method
dropWhile only drop the first N elements as long as they all check the condition. At the very first moment the assertion is not validated, the method stop removing elements.
span
But as we were talking at the introduction, what happens if I want to apply one of this functions without loosing the remaining elements in the collection? In that case,
span is your friend.
Its signature is:
def span(p: A => Boolean): (Traversable[A], Traversable[A])
And the way it works, just to picture it, is returning, for a
t collection and a constraint(function)
f,
(t takeWhile f, t dropWhile f), but quoting Scala api) “possibly[sic] more efficient than”.
Examples, examples everywhere…
case class Event(timeStamp: Long) val events: Stream[Event] = ??? val systemCrashTimestamp: Long = ??? val (eventsBeforeCrash,eventsAfterCrash) = events.span(_.timeStamp <= systemCrashTimeStamp)
In this example, we’re modeling possible events that may happen to a system. By reading some
Stream, we access all events that occurred to the system to monitorize. On the other hand, we’re notified that some fatal-terrible error take place in the system (
systemCrashTimeStamp).
For splitting events that took place before the death-fatal-error, from the other that happened later; we can use
span (et voilà!)
partition
Ok then, if you looked closer before with
takeWhile and
dropWhile examples, a lil’ problem could be inferred: if you split collections this way, takeWhile only took first elements that checked the condition, but not all of them.
A first logical approach (that you may have used at some point), is to write something like this:
val numbers = List(2, 3, 4, 5, 6, 7) val isEven: Int => Boolean = _ % 2 == 0 val even = numbers.filter(isEven) val odd = numbers.filterNot(isEven)
Not bad. But like method
span, we can think about
partition like a method that, given a collection called
t and a function
f, behaves as follows:
(t filter f, t filterNot f); making implementation much easier (and “possibly[sic] more efficient than”):
val numbers = List(2, 3, 4, 5, 6, 7) val isEven: Int => Boolean = _ % 2 == 0 val (even, odd) = numbers.partition(isEven)
Until next tip.
Peace out! | https://scalerablog.wordpress.com/2015/10/19/traversable-ops-partition-span-among-many-other-things/ | CC-MAIN-2018-05 | refinedweb | 578 | 59.13 |
Introduction:
Here I will explain how to solve the Service on local computer started and then stopped, some services stop automatically if there are not in use by other services or programs
Description:
“The Service on local computer started and then stopped ,Some services stop automatically if there are not in use by other services or programs.”
Now I will explain how to solve the Service on local computer started and then stopped, some services stop automatically if there are not in use by other services or programs.
To solve this problem we have two ways
First Way
Start --> Run --> Type Services.msc and click enter button now you will get all the services in your computer after that select your service and right click on that and go to Properties
After that open Select Log On tab in that select Local System Account and click ok now your problem will solve
Otherwise check second way.
Second Way
First right click on My Computer and select Manage now computer management window will open
In that window select Application log and right click on it and select properties now another window will open in that change maximum log size to bigger value and check Overwrite events when needed
38 comments :
Good Suresh......:)
@mis solutions
Please don't post spam comments
@Priyanshu
Please don't post spam comments
Guys no need of looking into logs and all
just put VIA protocol disabled from configuration manager. And try with restarting those services..
more details at
please check the properties of service
I have done the same thing but its not working still the same error when i check the appl log file it shows the below error
Service cannot be started. System.Configuration.ConfigurationErrorsException: The binding at system.serviceModel/bindings/netTcpBinding does not have a configured binding named 'TCPBindingBasicExample.WCFServiceHost.MyServiceBehaviour'. This is an invalid value for bindingConfiguration. (E:\My-VS-Project\TCPBindingBasicExample\WCFServiceHost\bin\Debug\WCFServiceHost.exe.Config line 44) Sys...
have any idea ??
this solution doesn't work for me...Any one with other solution????????????
Service cannot be started. System.Web.HttpException: The transport failed to connect to the server.
---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Runtime.InteropServices.COMException (0x80040213): System.Web.Mail.SmtpMail.LateBoundAccessHelper.CallMethod(Type type, Object obj, String methodName, Object[] args)
at System.Web.Mail.SmtpMail.LateB...
super :) it worked...
Not working for me,it is still showing same message
not working for me as well.
First way worked perfectly. Thank you!
Thanks Suresh... helpful !!
Still am getting these error i have done those two steps
The Service on local computer started and then stopped ,Some services stop automatically if there are not in use by other services or programs
This not work for me........
i check 2 way but not work .........some message display .....
“The Service on local computer started and then stopped ,Some services stop automatically if there are not in use by other services or programs.”
how to solve plz help me......
Neither solution worked for me. 0/2
sorry to say sir both methods don't worked .....plz suggest the solution to this error
I have written a email code in windows service.
Without email code it is working very well but when I am using email code then it is showing the same error. I have already tried both above solution but still there is problem. Please help me. below is the code...
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.ServiceProcess;
using System.Text;
using System.IO;
using System.Timers;
using System.Net;
using System.Net.Mail;
using System.Data.SqlClient;
namespace WinEmailService
{
public partial class EmailScheduleService : ServiceBase
{
Timer timer = new Timer();
public EmailScheduleService()
{
InitializeComponent();
}
protected override void OnStart(string[] args)
{
//add this line to text file during start of service
TraceService("start service");
//handle Elapsed event
timer.Elapsed += new ElapsedEventHandler(OnElapsedTime);
//This statement is used to set interval to 1 minute (= 60,000 milliseconds)
timer.Interval = 60000;
//enabling the timer
timer.Enabled = true;
}
protected override void OnStop()
{
timer.Enabled = false;
TraceService("stopping service");
}
private void OnElapsedTime(object source, ElapsedEventArgs e)
{
TraceService("Another email send at " + DateTime.Now);
}
private void TraceService(string content)
{
//set up a filestream
FileStream fs = new FileStream(@"d:\EmailSendTest.txt", FileMode.OpenOrCreate, FileAccess.Write);
//set up a streamwriter for adding text
StreamWriter sw = new StreamWriter(fs);
//find the end of the underlying filestream
sw.BaseStream.Seek(0, SeekOrigin.End);
//add the text
sw.WriteLine(content);
//add the text to the underlying filestream
sw.Flush();
//close the writer
sw.Close();
SmtpClient m = new SmtpClient("smtp.rediffmailpro.com");
m.Port = 587;
m.UseDefaultCredentials = true;
MailAddress To = new MailAddress("impraveen26@gmail.com");
MailAddress From = new MailAddress("abc@gmail.com");
MailMessage mm = new MailMessage(From, To);
mm.Subject = "Automatic mail--- Testing -----";
mm.IsBodyHtml = true;
mm.Body = "Testing......";
m.Credentials = new System.Net.NetworkCredential("abc@gmail.com", "password");
m.DeliveryMethod = SmtpDeliveryMethod.Network;
m.Send(mm);
}
}
}
I also have the same prob, can some one please help me out?
nice post sir ur post is very helpfull for me every time
have a same problem and cannot fix it
I have got same problem and used your solution for it. But it is not solve. please help me asap.
Thank you.
helpfulllllll (Y)
It's not working..:(
I tried both options still facing the same issue. even i am logged in with administrator account.
I have tried both ways but not working showing same error with reason in log event
The installation if sql server agent is disable the edition of sql server that installed this service does not support sql agent
windows 7 users ->Right Click on "MyComputer" ->Manage->System Tools -> Windows Logs ->Application
if you have any error in window service will show there
double click on error will get exact problem!!!
Very Helpful
Check port number in IIS whether it is already consumed by other service or not. If yes then Stop that and then try to start your service.
:- sandeshsalunke55@gmail.com
while installation the folder paths should not have blank spaces , it works perfectly.
This is possibly shown when system do not have enough memory hence you need to alter the Xmx and Xms parameters. This is also possible if system do not have enough space in the hard disk. This can also happen if the service needs to be started from administrator privilege hence you need to give logon of the particular user | https://www.aspdotnet-suresh.com/2011/06/service-on-local-computer-started-and.html | CC-MAIN-2018-17 | refinedweb | 1,085 | 59.09 |
To import JSON into your TypeScript code, you need to add the following code to a typings file (a file with a name like
*.d.ts, say,
json.d.ts—but it does not necessarily need to say json)1.
// This will allow you to load `.json` files from disk declare module "*.json" { const value: any; export default value; } // This will allow you to load JSON from remote URL responses declare module "json!*" { const value: any; export default value; }
After doing this, you can do the following in TypeScript.
import * as graph from './data/graph.json'; import data from "json!";
You can then use
graph and
data as JSON objects in your TypeScript code.
I used this code to load a Dynamo JSON graph into TypeScript — just change the
.dyn extension to
.json and it will work with this code. | http://nono.ma/says/load-a-json-file-with-typescript | CC-MAIN-2018-39 | refinedweb | 140 | 78.04 |
Important changes to forums and questions
All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com.
4 months ago.
Printing to serial disables RX IRQ
It seems that printing to a serial port breaks other serial ports' interrupts.
What should happen:
- LED blinks on receive
- After 5 seconds text gets printed to PC
- LED continues blinking
What actually happens:
- LED blinks on receive
- After 5 seconds text gets printed to PC
- LED no longer blinks
My board: NRF52-DK
Mbed-OS version: 5.14.0
ARM Compiler 6
The code:
#include "mbed.h" Serial pc(USBTX, USBRX); RawSerial gps(NC, P0_28); DigitalOut led(LED1, 1); void rxIrq() { led = !led; do { gps.getc(); } while (gps.readable()); } int main() { gps.baud(9600); pc.baud(115200); gps.attach(&rxIrq); ThisThread::sleep_for(5000); pc.puts("ABABABAABABABA\n"); return 0; }
1 Answer
4 months ago.
Hello Carbon,
I think it's because
return 0;
terminates the program. Try to delete it or in case you build with Mbed OS 2 (aka mbed classic) replace it with
while (true) {}
The problem was that the nRF52832 had only 1 UART peripheral. Printing to the PC caused the MCU to connect the peripheral to different pins (USBTX, USBRX) which broke the interrupt.posted by Carbon . 04 Dec 2019 | https://os.mbed.com/questions/87600/Printing-to-serial-disables-RX-IRQ/ | CC-MAIN-2020-16 | refinedweb | 222 | 68.16 |
Tell us what you think of the site.
Hi,
I am using Maya 2011(64bit) and MySQL 5.5 (64 bit) in Windows 7 (64 bit) machine. I tried to connect maya with Mysqldb through python. So i copied the connector files into maya\python\lib\site packages. I was able to import MYsqldb module without any error. But when i tried call the cursor object(for querying), I found that Maya is not recognizing the cursor object.
Here is my sample code:
import MySQLdb as mb
import maya.cmds as cmds
def mysql_connect(hostname, username, password, dbname):
db = mb.connect(host=hostname,user=username,passwd=password,db=dbname)
db = mysql_connect("localhost", “root”, “test”, “mydbt")
dbcursor = db.cursor()
dbcursor.execute("select * from maya")
# Error: AttributeError: ‘NoneType’ object has no attribute ‘cursor’ #
I tried verifying the env-path variables, replacing the connector files but still the problem persists.
Since being a beginner, i am un-able to identify the exact issue.
Any suggestions?
newbee_
[/size]
Hey newbee_,
If you can import the module without any trouble it can’t be an issue with your env. It seems your function does not return the connection, so you cant request a cursor from it.
import MySQLdb as mb
import maya.cmds as cmds
def mysql_connect(hostname, username, password, dbname):
# try to establish a connection and return the connection
try:
conn = mb.connect(host=hostname,user=username,passwd=password,db=dbname)
return conn
except MySQLdb.Error, e:
# in case it fails, return an error
print "MySQL error %d: %s" % (e.args[0], e.args[1])
# return None when there is no connection
return None
db = mysql_connect("localhost", “root”, “test”, “mydbt")
# check if db is None, if so, don't create a cursor
if not db is None:
dbcursor = db.cursor()
dbcursor.execute("SELECT * FROM maya")
print dbcursor.fetchall() #output the results
It has some error handling in it, in case you get errors.
Hope this helps,
Jeroen
Hi Jeroen,
I failed to return the connection. So it returned “none”.
Thank you Jeroen | http://area.autodesk.com/forum/autodesk-maya/python/connecting-maya-2011-with-mysqldb/ | crawl-003 | refinedweb | 338 | 59.09 |
Post Syndicated from Assaf Namer original environments, and you can adapt your existing processes, tools, and methodologies for use in the AWS Cloud. For more details about best practices for monitoring your AWS resources, see the “Manage Security Monitoring, Alerting, Audit Trail, and Incident Response” section in the AWS Security Best Practices whitepaper.
This blog post focuses on how to log and create alarms on invalid Secure Shell (SSH) access attempts. Implementing live monitoring and session recording facilitates the identification of unauthorized activity and can help confirm that remote users access only those systems they are authorized to use. With SSH log information in hand (such as invalid access type, bad private keys, and remote IP addresses), you can take proactive actions to protect your servers. For example, you can use an AWS Lambda function to adjust your server’s security rules when an alarm is triggered that indicates an invalid SSH access attempt.
In this post, I demonstrate how to use Amazon CloudWatch Logs to monitor SSH access to your application servers (Amazon EC2 Linux instances) so that you can monitor rejected SSH connection requests and take action. I also show how to configure CloudWatch Logs to send SSH access logs from application servers that reside in a public subnet. Last, I demonstrate how to visualize how many attempts are made to SSH into your application servers with bad private keys and invalid user names. Using these techniques and tools can help you improve the security of your application servers.
AWS services and terminology I use in this post
In this post, I use the following AWS services and terminology:
- Amazon CloudWatch – A monitoring service for the resources and applications you run on the AWS Cloud. You can use CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
- CloudWatch namespaces – Containers for metrics. Metrics in different namespaces are isolated from each other so that metrics from different applications are not mistakenly aggregated into the same statistics. You also can create custom metrics for which you must specify namespaces as containers.
- CloudWatch Logs – A feature of CloudWatch that allows you to monitor, store, and access your log files from EC2 instances, AWS CloudTrail, and other sources. Additionally, you can use CloudWatch Logs to monitor applications and systems by using log data and create alarms. For example, you can choose to search for a phrase in logs and then create an alarm if the phrase you are looking for is found in the log more than 5 times in the last 10 minutes. You can then take action on these alarms, if necessary.
- Log stream – A log stream represents the sequence of events coming from an application instance or resource that you are monitoring. In this post, I use the EC2 instance ID as the log stream identifier so that I can easily map log entries to the instances that produced the log entries
- Log group – In CloudWatch Logs, a group of log streams that share the same retention time, monitoring, and access control settings. Each log stream must belong to one log
- Metric – A specific term or value that you can monitor and extract from log events.
- Metric filter – A metric filter describes how Amazon CloudWatch Logs extracts information from logs and transforms it into CloudWatch metrics. It defines the terms and patterns to look for in log data as the data is sent to CloudWatch Logs. Metric filters are assigned to log groups, and all metric filters assigned to a given log group are applied to their log stream—see the following diagram for more details.
- SSH logs – Reside on EC2 instances and capture all SSH activities. The logs include successful attempts as well as unsuccessful attempts. Debian Linux SSH logs reside in /var/log/auth.log, and stock CentOS SSH logs are written to /var/log/secure. This blog post uses an Amazon Linux AMI, which also logs SSH sessions to /var/log/secure.
- AWS Identity and Access Management (IAM) – IAM enables you to securely control access to AWS services and resources for your users. In the solution in this post, you create an IAM policy and configure an EC2 instance that assumes a role. The IAM policy allows the EC2 instance to create log events and save them in an Amazon S3 bucket (in other words, CloudWatch Logs log files are saved in the S3 bucket).
-.
Architectural overview
The following diagram depicts the services and flow of information between the different AWS services used in this post’s solution.
Here is how the process works, as illustrated and numbered in the preceding diagram:
- A CloudWatch Logs agent runs on each EC2 instance. The agents are configured to send SSH logs from the EC2 instance to a log stream identified by an instance ID.
- Log streams are aggregated into a log group. As a result, one log group contains all the logs you want to analyze from one or more instances.
- You apply metric filters to a log group in order to search for specific keywords. When the metric filter finds specific keywords, the filter counts the occurrences of the keywords in a time-based sliding window. If the occurrence of a keyword exceeds the CloudWatch alarm threshold, an alarm is triggered.
- An IAM policy defines a role that gives the EC2 servers permission to create logs in a log group and send log events (new log entries) from EC2 to log groups. This role is then assumed by the application servers.
- CloudWatch alarms notify users when a specified threshold has been crossed. For example, you can set an alarm to trigger when more than 2 failed SSH connections happen in a 5-minute period.
- The CloudWatch dashboard is used to visualize data and alarms from the monitoring process.
Deploy and test the solution
1. Deploy the solution by using CloudFormation
Now that I have explained how the solution works, I will show how to use AWS CloudFormation to create a stack with the desired solution configuration. CloudFormation allows you to create a stack of resources in your AWS account.
- Sign in to the AWS Management Console, choose CloudFormation, choose Create Stack, choose Specify an Amazon S3 template URL and paste the following link in the box:
- Choose Launch to deploy the stack.
- On the Specify Details page, enter the Stack name. Then enter the KeyName, which is the SSH key pair for the region you use. I use this key-pair later in this post; if you don’t have a key pair for your current region, follow these instructions to create one. The OperatorEmail is the CloudWatch alarm recipient email address (this field is mandatory to launch the stack), which is the email address to which SSH activity alarms will be sent. You can use the SSHLocation box to limit the IP address range that can access your instances; the default is 0.0.0/0, which means that any IP can access the instance. After specifying these variables, click Next.
- On the Options page, tag your instance, and click Next. Tags allow you to assign metadata to AWS resources. For example, you can tag a project’s resources and then use the tag to manage, search for, and filter resources. For more information about tagging, see Tagging Your Amazon EC2 Resources.
- Wait until the CloudFormation template shows CREATE_COMPLETE, as shown in the following screenshot. This means your stack was created successfully.
After the stack is created successfully, you have two distinct application servers running, each with a CloudWatch agent. These servers represent a fleet of servers in your infrastructure. Choose the Outputs tab to see more details about the resources, such as the public IP addresses of the servers. You will need to use these IP addresses later in this post in order to trigger log events and alarms.
The CloudWatch log agent on each server is installed at startup and configured to stream SSH log entries from /var/log/secure to CloudWatch via a log stream. CloudWatch aggregates the log streams (ssh.log) from the application servers and saves them in a CloudWatch Logs log group. Each log stream is identified by an instance-ID, as shown in the following screenshot.
The application servers assume a role that gives them permissions to create CloudWatch Logs log files and events. CloudFormation also configures two metrics: ssh/InvalidUser and ssh/Disconnect. The ssh/InvalidUser metric sends an alarm when there are more than 2 SSH attempts into any server that include an invalid user name. Similarly, the ssh/Disconnect metric creates an alarm when more than 10 SSH disconnect requests come from users within 5 minutes.
To review the metrics created by CloudFormation, choose Metrics in the CloudWatch console. A new SSH custom namespace has been created, which contains the two metrics described in the previous paragraph.
You should now have two application servers running and two custom CloudWatch metrics and alarms configured. Now, it’s time to generate log events, trigger alarms, and test the configurations.
2. Test SSH metrics and alarms
Now, let’s try to trigger an alarm by trying to SSH with an invalid user name into one of the servers. Use the key pair you specified when launching the stack and connect to one of the Linux instances from a terminal window (replace the placeholder values in the following command).
Now, exit the session and try to sign in as bad-user, as shown in the following command.
The following command is the same as the previous command, but with the placeholder values replaced by actual values.
Because the alarm triggers after two or more unsuccessful SSH login attempts with an invalid user name in 1 minute, run the preceding command a few times. The server’s log captures the bad SSH login attempts, and after a minute, you should see InvalidUserAlarm in the CloudWatch console, as shown in the following screenshot. Choose Alarms to see more details. The alarm should disappear after another minute if there are no more SSH login attempts.
You can also view the history of your alarms by choosing the History tab. CloudWatch metrics are saved for 15 months.
When the CloudFormation stack launches, a topic-registration email is sent to the email address you specified in the template. After you accept the topic registration, you will receive an alarm email with details about the alarm. The email looks like what is shown in the following screenshot.
3. Understanding CloudWatch metric filters and their transformation
The CloudFormation template includes two alarms, InvalidUserAlarm and SSHReceiveddisconnectAlarm, and two metric filters. As I mentioned previously, the metric filters define the pattern you want to match in a CloudWatch Logs log group. When a pattern is found, it transforms into an Amazon metric as defined in the MetricTransformations section of the metric filter.
The following is a snippet of the InvalidUser metric filter. Each pattern match—denoted by FilterPattern—is counted as one metric value as specified in the MetricValue parameter in the MetricTranformations section. The CloudWatch alarm associated with this metric filter will be triggered when the metric value crosses a specified threshold.
When a CloudWatch alarm is triggered, the service sends an email to an Amazon SNS topic with more details about the alarm type, trigger parameters, and status.
4. Create a CloudWatch metric filter to identify attempts to SSH into your servers with bad private keys
You can create additional metric filters in CloudWatch Logs to provide better visibility into the SSH activity on your servers. Let’s assume you want to know if there are too many attempts to SSH into your servers with bad private keys. If an attempt is made with a bad private key, a line like the following is logged in the SSH log file.
You can produce this log line by modifying the pem file you are using (a pem file holds your private key). In a terminal window, modify your private key by copying and pasting the following lines in the same directory where your key resides.
These lines simply change the characters at positions 25 and 26 from their current value to the character A, keeping the original pem file intact. Alternatively, you can use nano <valid-keys>.pem from the command line or any other editor, change a character, save the file as bad-keys.pem, and exit the file.
Now, try to use bad-keys.pem to access one of the application servers.
The SSH attempt should fail because you are using a bad private key.
Now, let’s look at the server’s ssh.log file from the CloudWatch Logs console and analyze the error log messages. I need to understand the log format in order to configure a new filter. To review the logs, choose Logs in the navigation pane, and select the log group that was created by CloudFormation (it starts with the name you specified when launching the CloudFormation template).
In particular, notice the following line when you try to SSH with a bad private key.
Let’s add a metric filter to capture this line so that we can use this metric later when we build an SSH Dashboard. Copy the following line to the Filter events search box at the top of the console screen and press Enter.
You can now see only the lines that match the pattern you specified. These are the lines you want to count and transform into metrics. Each string in the message is represented by a word in the filter. In our example, we are looking for a pattern where the sixth word is Connection and the seventh word is closed. Other words in the log line are not important in this context. The following image depicts the mapping between a string in a log file and a metric filter.
To create the metric filter, choose Logs in the navigation pane of the CloudWatch console. Choose the log groups to which you want to apply the new metric filter and then choose Create Metric Filter. Choose Next.
Paste the filter pattern we used previously (the sixth word equals Connection and the seventh word equals closed) in the Filter Pattern box. Select the server you tried to sign in to with the bad private key to Select Log Data to Test and click Test Pattern. You should see the results that are shown in the following screenshot. When completed, click Assign Metric.
Type SSH for the Metric Namespace and sshClosedConnection-InvalidKeysFilter for Filter Name. Choose Create Filter to see your new metric filter listed. You can use the newly created metric filter to graph the metrics and set alarms. The alarms can be used to inform your administrator via email of any event you specify. In addition, metrics can be used to generate SNS notification to trigger an AWS Lambda function in order to take proactive actions, such as blocking suspicious IP addresses in a security group.
Choose Create Alarm next to Filter Name and follow the instructions to create a CloudWatch alarm.
Back at the Metrics view, you should now have three SSH metric filters under Custom Namespaces. Note that it can take a few minutes for the number of SSH metrics to update from two to three.
5. Create a graph by using a CloudWatch dashboard
After you have configured the metrics, you can display SSH metrics in a graph. CloudWatch dashboards allow you to create reusable graphs of AWS resources and custom metrics so that you can quickly monitor your operational status and identify issues at a glance. Metrics data is kept for a period of two weeks.
In the CloudWatch console, choose Dashboards in the navigation pane, and then choose Create dashboard to create a new graph in a dashboard. Name your dashboard SSH-Dashboard and choose Create dashboard. Choose Line Graph from the graph options and choose Configure.
In the Add metric graph window under Custom Namespace, choose SSH > Metrics with no dimensions. Select all three metrics you have configured (the CloudFormation template configured two metrics and you manually added one more metric).
By default, the metrics are displayed on the graph as an average. However, you configured metrics that are based on summary metrics (for example, the total number of alarms in two minutes). To change the default, choose the Graphed metrics tab, and change the statistic from Average to Sum, as shown in the following screenshot. Also, change the time period from 5 minutes to 1 minute.
Your graphed metrics should look like the following screenshot. When you have provided all the necessary information, choose Create Widget.
You can rename the graph and add static text to give the console more context. To add a text widget, choose Widget and select text. Then edit the widget with markdown language. Your dashboard may then look like the following screenshot.
The consolidated metrics graph displays the number of SSH attempts with bad private keys, invalid user names, and too many disconnects.
Conclusion
In this blog post, I demonstrated how to automate the deployment of the CloudWatch Logs agent, create filters and alarms, and write, test, and apply metrics on the fly from the AWS Management Console. You can then visualize the metrics with the AWS Management Console. The solution described in this post gives you monitoring and alarming capabilities that can help you understand the status of and potential risks to your instances and applications. You can easily aggregate logs from many servers and different applications, create alarms, and display logs’ metrics on a graph.
If you have comments about this post, submit them in the “Comments” section below. If you have questions about the solution in this post, start a new thread on the CloudWatch forum.
– Assaf | https://noise.getoto.net/author/assaf-namer/ | CC-MAIN-2018-26 | refinedweb | 2,970 | 61.77 |
web hosting shopping cart
web hosting shopping cart What is web hosting shopping cart? Please... for hosting the shopping cart application. Since shopping cart application... hosting companies specialized in shopping cart hosting will help you configuring
jsp code for shopping cart
jsp code for shopping cart please provide me the jsp code for online shopping cart
Free Java Shopping Cart,Shopping cart Application
Free Java Shopping Cart application and setup your online store. Shopping cart... and Installing Shopping Cart Application
Free Shopping Cart Application
Designing and develop a shopping cart application using Java technologies
Shopping Cart
Shopping Cart
In this section we will discuss about shopping cart and how you can use
different open source shopping cart for hosting your shopping... of the easiest
way to sale products, you have to setup shopping cart application, add your
shopping cart
shopping cart hi i want the sample project for shopping cart in struts with mysql ,if any one knows means please help me
shopping cart using sstruts - Struts
shopping cart using sstruts Hi,
This is question i asked ,u send one link.but i cant able to ddownload the file .please send the code file... with my sql.
Please send the examples code as soon as possible.
please send
What is shopping cart?
is
not stored by shopping cart application.
The shopping cart software helps....
Following are the components of a shopping cart application:
A database... the
performance of your shopping portal. In a good shopping cart application
Open Source Shopping Cart
open source shopping cart. You can download and
modify the source code... can modify source code of shopping cart as
per your needs.
The open...Open Source Shopping Cart
Open Source Shopping carts software
jsp shopping cart - JSP-Servlet
. but in that i require the code for how to insert a item into cart and also delete from it and how could I add more than one item into the cart. please give me an idea for it. sir please give me reply as soon as possible.thank u Hi
Shopping cart Application
Shopping Cart Application
... forum
Download and build from source
a) Preliminaries
i) Tools required to build Simple Cart
Shopping Cart design
in development of shopping cart
using open source technologies such as Java, .NET... for your site and use open
source shopping cart of your choice to create online... source shopping cart:
PRESTA shop
OPEN Cart
Magento... of how to get the app up and running. Urgent help please and thanks a lot
please give me an example about shopping cart using spring and hibernate
please give me an example about shopping cart using spring and hibernate who can give me an ex about shoppingcart using spring and hibernate intergration ?
thanks alot
Choosing the Best Ecommerce Shopping Cart
to go for these
open source shopping cart software and save lot of money.
See...;
OpenCart
OpenCart is open source shopping software supporting 20+ Payment...Choosing the Best Ecommerce Shopping Cart
In this articles explains you how
shoping items billng source code
shoping items billng source code Hi, I am doing a project on invoice... in the data base, so please help me in this,
<%@page import="java.util.Iterator... is a jsp application, where user is allowed to select the item code from
To create a shopping cart
To create a shopping cart I want to create a online shopping cart using struts and hibernate.
I created my first page. Now i want to provide... for different products as mentioned above.
Hope someone will help me out.
thnx
Web hosting with shopping cart
for your shopping cart
application. Since the performance of the website is very...; a shopping
cart application...Web hosting with shopping cart
If you are looking for hosting your shopping
add to cart
add to cart sir,
i want to do add to cart to my shopping application each user using sessions.
Plz help thnaks in advance
online shopping cart complete coding in pure jsp
online shopping cart complete coding in pure jsp online shopping cart complete coding in pure jsp
Please visit the following link:
JSP Online shopping cart
online shopping project
online shopping project sir,
plz can u send me the coding of simple application of online shopping cart project using jsp and servlets which should be run in netbeans without any errors
source code
source code sir...i need an online shopping web application on struts and jdbc....could u pls send me the source code
Maximize Sales By Setting up Your Shopping Cart
Maximize Sales By Setting up Your Shopping Cart
Setting up a shopping cart... of the tools available in a wise manner. The shopping cart solution comes in varied... to and even works along with other features of the shopping cart solution
source code - JSP-Servlet
source code I want source code for Online shopping Cart.
Application Overview
The objective of the Shopping Cart is to provide an abstract view... functional application.
System Scope
Shopping Cart is a web application intended
Online Shopping Cart Services
Knowing About Online Shopping Cart Services
The whole world has become... reasons, now people depend on e-commerce shopping cart. One of them is the security these websites provide. These shopping cart solutions offer the users to do
struts opensource framework
struts opensource framework i know how to struts flow ,but i want struts framework open source code
Open Source e-commerce
of e-commerce; a free, user-friendly, open source shopping cart system... an open source shopping cart as the engine behind your site gives you a thoroughly... code and J2EE best practices.
Open Source E Software
. Open source software
is very useful because source code also distributed along with the executable of
the application. Anyone can modify the source code as per their needs, compile
and use the application.
Open source software
Error in Shopping Cart project - Development process
Error in Shopping Cart project I tried running the shopping cart project given in the below link:... when I submit "" I received the following error
Tools required to build Simple Cart
Tools required to build Simple Cart
Shopping cart application is written in Java and so you can compile with a standard JDK on any platform computer.
The following
complete coding for shopping card
complete coding for shopping card complete coding for shopping card
Please visit the following link:
Shopping Cart Application
code and specification u asked - Java Beginners
code and specification u asked you asked me to send the requirements in detail and the code too.so iam sendind you the specification...();
display.dispose();
}
}
so i have sent u the code
How to prevent adding duplicate items to the shopping cart
;?php
session_start();
if (!isset($_SESSION['SHOPPING_CART'])){ $_SESSION['SHOPPING_CART'] = array(); }
if (isset($_GET['itemID']) && isset($_GET...']
);
$_SESSION['SHOPPING_CART'][] = $ITEM;
header('Location
Open Source PHP
with phc.
PHP shopping cart with open source code
Most... by hindering competition and plagiarism. X-Cart is a software with open source code. We... with open source code is a good way to get the right features quickly.
Open Source GPS
Open Source GPS
Open
Source GPSToolKit
The goal of the GPSTk project is to provide a world class, open source computing suite to the satellite....
Open Source GPS Software
Working with GPS
Open Source Jobs
its original design free of charge. Open source code is typically created...Open Source Jobs
Open Source Professionals
Search firm specializing in the placement of Linux / Open Source professionals, providing both
Shopping Cart Products
Know To Sell The Right Products On E-Commerce Shopping Cart
E-commerce... differently can you present them with your e-commerce shopping cart? What kind of offers... as this is the fastest way of shopping and shipping products to customers. Many online
online shopping - Java Beginners
online shopping Respected Sir,
Sir please help me
how to handle online shooping and shopping cart by click on image by
only using jsp
error please send me the solution
error please send me the solution HTTP Status 500 -
type Exception...
java.sql.DriverManager.getConnection(Unknown Source)
java.sql.DriverManager.getConnection(Unknown Source)
DisplayServlet.doPost(DisplayServlet.java:56
Java is an opensource program or not? why???
Java is an opensource program or not? why??? Java is an open source program or not.. why
online shopping
to save.industry which one prefers storing or saving.explain and give the source code for that. please help me sir as soon as possible...online shopping Hai sir/madam,
i'm working on online
Open Source Exchange
;
DDN Open Source Code Exchange
The DDN site...Open Source Exchange
Exchange targeted by open-source group
A new open-source effort dubbed
Introduction To Application
Introduction To Application
The shopping cart application allows customer... of online shopping cart
which allows online shopping. It is developed for the readers who wants to know
how a shopping cart application can be written in struts
Open-source software
of the application. But if the software is Open-source then the source code is also...Open-source software Hi,
What is Open-source software? Tell me the most popular open source software name?
Thanks
Hi,
Open-source
Open Source Project Management
Open Source Project Management
Open Source Project Management... to access our support forums.
An Open-Source Based... provide these features thanks to the open-source nature of ]project-open
Open Source Community
developers tell me over and over again that now there is no myth of open source..., open source or not; neither open source nor proprietary code should be considered...Open Source Community
Open
Source Research Community
Data Access Object
();
}
return users;
}
The Complete code of the application is given in the end... interface to access data.
The DAO given with application, uses hibernate to access.... Following is a code to get a connection
with the database using annotation
Open Source Groupware
Open Source Groupware
Open
Groupware
Open Group... Software
This list is focused on open source software projects relating... see the Free Software Foundation and Open Source Intiative for definitions of Free time
how to send email please give me details with code in jsp,servlet
how to send email please give me details with code in jsp,servlet how to send email please give me details with code in jsp,servlet
Please help me
Please help me Hi Sir,
please send me the code for the following progrems...
1) all sets are integer type:
input:
set1={10,20,30,40...: "+b);
System.out.println();
Collections.sort(l);
System.out.println("A U B
Open Source E-mail
code for complete control.
POPFile: Open Source E-Mail...Open Source E-mail Server
MailWasher Server Open Source
MailWasher Server is an open-source, server-side junk mail filter package
help please
thing i want to imp in my app.please any one have app of this imp pls send me war file.. Or atleast help me with code here.. I tried checking session alive...help please hi i am done with register application using jsps
Open Source web mail
Open Source web mail
Open
Web Mail Project
Open WebMail... Outlook to Open
WebMail.Open WebMail project is an open source effort made... WebMail project is working to accomplish.
Open Source
Open Source Content Management
code for open-source CMSes is freely available so it is possible to customize... money pit?" The article prodded me to learn more about open source CMSs... application development.
Best Open Source Content
online shoping portal
online shoping portal Hello Sir/Madam ,
I am working on a project online shopping portal in Java , I want to share my screen with my friend to show my selected things , can you help me out?what should i do
Download and Build from Source
Download and Build from Source
Shopping cart... Of An Application
Download Source Code...
Shopping Cart Features
Database Design
Creating Data Access Object
Open Source Images
application of open source technology. Having recently developed several database...Open Source Images
Open
Source Image analysis Environment
TINA (TINA Is No Acronym) is an open source environment developed to accelerate
Joomla Shopping Cart Package
Joomla Shopping Cart Package
We offer one of the best packages in IT world in this section. Our shopping cart package include all essential components... to the client’s online shopping website requirements. Our shopping cart
Open Source VoIP
calls.
Asterisk, an open-source application that provides all... the Asterisk open source PBX phone system has given me some hope that we?re returning...Open Source VoIP/TelePhony
Open source VoIP/Telephony
One
please send me javascript validation code - Java Beginners
please send me javascript validation code hallo sir , please send me java script code for this html page.since i want to do validation.i am a new user in java ....please send me its urgent
Open Source E-mail Server
code for complete control.
POPFile: Open Source E-Mail...Open Source E-mail Server
MailWasher Server Open Source
MailWasher Server is an open-source, server-side junk mail filter package
Please help me fix this code - MobileApplications
Please help me fix this code
Please help me in this area of code... in the background of the forms in this code
i want to sum all expenses amount...
* and open the template in the editor.
*/
import java.io.IOException
attendance management project source code
attendance management project source code sir i want full ateendance management project please send me source code i am asking so many members... answer so please send code it's very urgent
What to Find in E-Commerce Shopping Cart Software ?
What to Find in E-Commerce Shopping Cart Software
The e-commerce shopping...-commerce shopping cart should be esthetic enough and presentation of each product... more user-friendly, you have to select the right e-commerce shopping cart
facecart
facecart
facecart is unique open source shopping
solution that provides unmatched speed and agility... and application server shopping
cart. face Cart is complete AJAX java 5 EE e-commerce
online shopping code using jsp
online shopping code using jsp plz send me the code of online shopping using jsp or jdbc or servlets plz plz help me
Open Source Version Control
), a popular open-source application within which many developers store code...-only CVS access.
Open
Source code version control...Open Source Version Control
CVS:
Open source version control
CVS
Need Java Source Code - JDBC
on the that textfield for every employees. Can u please tell me how this can be implemented. Hi friend,
Please send me code because your posted...Need Java Source Code I have a textfield for EmployeeId in which
Module In An Application
Modules in a Application
Modules and there feature in the present application are as follows-
Customer Module
Customer Registration
Customer Login
View Products with Buy Options
Search product Category wise
Calculate
open source project
open source project i am a b.tec 3rd year ,i want to work in some open source java project, please suggest me
Open Source Application Server
Open Source Application Server
New Open-Source Application Server
A new open source application server is available for download from WSO2 Inc...;
Open-source application server enters the fray
Enterprise computing has
please send me the answer - JDBC
please send me the answer -difference between DriverManager and DataDourse what is Datasourse? What r the advantages? what is the difference between DriverManager and DataDourse
please help me.
please help me. Please send me a code of template in opencms and its procedure.so i can implement the code.
Thanks
trinath
Open Source content Management System
the $1.2m CMS money pit?" The article prodded me to learn more about open source...
Open Source content Management System
The Open Source Content Management System
OpenCms is a professional level Open Source Website Content
Open Source Servers
Zope is an open source application server for building content management systems... is open source mobile application server software that provides push email...Open Source Servers
Open Source Streaming Server
the open source
Open Source CD
the largest number of software packages.
Open source code... to contain source code from the open-source project LAME, an MP3 encoder and player...Open Source CD
TheOpenCD
TheOpenCD is a collection of high quality | http://www.roseindia.net/tutorialhelp/comment/13842 | CC-MAIN-2014-52 | refinedweb | 2,695 | 64.1 |
Using custom functions to perform Excel calculations offers a greater extensibility within your spreadsheet. For example, you may need to perform calculations using a deeply nested formula, company proprietary formulas from your finance team, or a combination of standard functions. Yes, you can add two cells using a built-in function; however, suppose you want to concatenate string values of two cells. Additionally, you may need to calculate the sum of those cells in a range. In such situations, calculations cannot be handled using a standard built-in function. You’ll need to write a custom function or a user-defined function.
Advantages of custom functions
- They run across all Excel platforms (Win, Mac, Mobile, Office-online).
- They run fast.
- The look and feel are like native Excel functions (e.g., formula tips)
- They can make web service calls
- They can run offline if they don’t depend on the web
- They can run even when the workbook is unattended
While built-in functions may be faster and better in memory usage, custom functions can help you add more extensibility to your Excel sheet's data calculations.
Custom functions in GrapeCity Documents for Excel
The Spread.Net and SpreadJS components have always supported custom functions. With the new GcExcel Service Pack release - 1.5.0.4, we now extend this support to our Documents product line. GrapeCity Documents for Excel (GcExcel) introduces custom functions for spreadsheets in .NET Core targeted applications.
Please visit this help topic to get started with the basics of adding custom functions in Excel spreadsheets using GcExcel.
This tutorial solves a use case where using a custom function would be beneficial. This article will guide you through the steps of solving the problem in a .NET Core application.
Use cases for custom functions
In this example, we'll compare a family’s monthly income vs. monthly expenses. The spreadsheet calculates percentage of income that is spent on household expenses. These calculations are repeated every month with data being replaced with the next month's information.
It can be a challenge to analyze data when the list of family expenses is large, or if the data spans across multiple sheets. It can become difficult to scan through all rows to find out the highest expenses. Suppose a person wants to analyze monthly expenses to know the highest expenditure. This can be easy to solve using a standard function. We can use =MAX(B11:B23) function, which will give the highest expense for a month.
Drilling into the data, say a person wants to explore how they can reduce current expenses. The highest two expenses that month will need to be understood. You may be able to calculate this using some combinations of standard in-built functions. However, to calculate the second highest value, there may be some coding involved. In this case, it's easier to calculate using a custom function.
How to add a custom function to the spreadsheet to calculate the highest two expenses in a month.
Step 1:
In order to add the data for monthly income and monthly expenses, follow the getting started steps to create a basic spreadsheet with GcExcel. At the end of the blog, your spreadsheet will look like this:
Step 2:
Create a class that derives from CustomFunction class.
public class HighestValues : CustomFunction { }
Step 3: Within the class, initialize an instance of the custom function, with the name of the function, the return type, and parameters for the custom function.
public HighestValues(): base("HighestValues", FunctionValueType.Text, new Parameter[] { new Parameter(FunctionValueType.Object), new Parameter(FunctionValueType.Number), new Parameter(FunctionValueType.Number) }) { }
Here, the name of the custom function is Highest Values. The return type for the function would be the list of item names for the expenses, so the return type would be Text value. This function will receive the parameter values for calculating the highest two values.
Step 4:
We’ll define the Evaluate function to find two highest values. This function performs some validations, such as the length of array received and the row, col values in it. Then, it adds the given array to a list of Temp objects (a class that holds the text and number values of range of cells). This list sorts the array and returns the highest two numbers.
public override object Evaluate(object[] arguments, ICalcContext context) { if (arguments.Length < 3) { return CalcError.Value; } var values = arguments[0]; if (values is IEnumerable<object>) { values = (values as IEnumerable<object>).FirstOrDefault(); } object[,] array = values as object[,]; if (array == null) { return CalcError.Value; } int rowCount = array.GetLength(0); int colCount = array.GetLength(1); if (rowCount <= 0 || colCount <= 0) { return CalcError.Value; } int resultCol = (int)(double)arguments[1] - 1; if (resultCol < 0 || resultCol >= colCount) { return CalcError.Num; } int numberCol = (int)(double)arguments[2] - 1; if (numberCol < 0 || resultCol >= colCount) { return CalcError.Num; } List<temp> list = new List<temp>(); for (int i = 0; i < rowCount; i++) { string text = array[i, resultCol]?.ToString(); double number = array[i, numberCol] is double ? (double)array[i, numberCol] : 0; list.Add(new temp(text, number)); } list.Sort((x, y) => { if (x.Number > y.Number) { return -1; } else if (x.Number == y.Number) { return 0; } else { return 1; } }); string result = null; int count = Math.Min(list.Count, 2); for (int i = 0; i < count; i++) { if (result != null) { result += ","; } result += list[i].Text; } return result; } private class temp { public string Text; public double Number; public temp(string text, double number) { this.Text = text; this.Number = number; } }
Step 5:
In static void Main[] function, call the AddCustomFunction() method. Create new GcExcel workbook and open the Excel spreadsheet in the workbook.
Workbook.AddCustomFunction(new HighestValues()); var workbook = new Workbook(); workbook.Open("SimpleBudget.xlsx");
Step 6:
Call the HighestValues custom function and pass the range of cells from which highest values are needed. Then collect the result in a variable and set the result in a cell.
workbook.Worksheets[0].Range["B25"].Formula = "HighestValues(B11:C23, 1, 2)"; var result = workbook.Worksheets[0].Range["B25"].Value; int rowIndex, columnIndex; GrapeCity.Documents.Excel.CellInfo.CellNameToIndex("C25", out rowIndex, out columnIndex); workbook.Worksheets[0].Range[rowIndex, columnIndex].Value = result;
Step 7:
Save your workbook.
workbook.Save("SimpleBudget.xlsx");
Run the application and you will see the highest expenses collected by the custom function in cell C25.
Note: MS Excel does not know any of our custom functions, so after saving to Excel, a #NAME error will be shown in the cell that contains the formula (B25).
Download the complete sample
How will you use this feature? Leave your comment below!
Create Custom Functions with Documents for Excel Download the latest version of GrapeCity Documents for Excel
Create Custom Functions with Documents for Excel
Download the latest version of GrapeCity Documents for ExcelDownload Now! | https://www.grapecity.com/blogs/using-custom-functions-with-an-excel-api-in-net-applications?utm_source=vsgallery&utm_medium=gcexceljava&utm_campaign=vsgallery | CC-MAIN-2020-10 | refinedweb | 1,124 | 50.43 |
This section will provide an introduction to the various concepts, approaches and features that make up SVG. It will probably be quite lengthy. So far, only the following introductory sections have been written:.
SVG supports two intended uses:
The following shows a trivial stand-alone SVG file with no content:
<?xml version="1.0" standalone="yes"?> <svg width="4in" height="3in" xmlns = ''> <!-- Insert drawing elements here --> </svg>
Download this example
The simplest drawings can be described by a sequence of drawing elements. The following example draws a rectangle:
<?xml version="1.0" standalone="no"?> <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG April 1999//EN" ""> <svg width="4in" height="3in"> <desc>This is a rectangle </desc> <g> <rect x="20" y="30" width="100" height="80"/> </g> </svg>
Download this example
The following shows how a fragment from the SVG namespace could be interspersed into a parent XML grammar:
<?xml version="1.0"?> <ABC xmlns="" xmlns: <!-- document in the parent namespace --> <svg:svg <svg:rectangle <!-- svg graphic continues --> </svg:svg> <!-- document in parent namespace continues --> </ABC>
Download this example
Drawings done in SVG will be much more accessible that drawings done as image formats for the following reasons:.
The current plan for SVG is to rely on CSS2's Web. | http://www.w3.org/TR/1999/WD-SVG-19990412/intro.html | CC-MAIN-2014-52 | refinedweb | 208 | 57.06 |
Code. Collaborate. Organize.
No Limits. Try it Today.
One of the more useless features of MFC is the ability to add items to a combo box in the resource editor. You might ask "Why?" Clearly it makes life easy. Well, it doesn't. In fact, it can make life impossible. For example, the only condition under which this is useful is if the strings are language-independent text and the whole ComboBox is completely insensitive to being sorted or unsorted. I've seen things like the resource editor adding items like
Black
Blue
Red
Green
and having code that says
switch(((CComboBox *)GetDlgItem(IDC_COLORS))->GetCurSel())
{
case 0: // black
color = RGB(0, 0, 0);
break;
case 1: // blue
color = RGB(0, 0, 255);
break;
}
You can see immediately that this is impossible to maintain; a change in the combo box resource has to be reflected in some unknown and unknowable code. Well, another solution is
#define COLOR_BLACK 0
#define COLOR_BLUE 1
...
switch(((CComboBox *)GetDlgItem(IDC_COLORS))->GetCurSel())
{
case COLOR_BLACK:
}
This is merely syntactic aspartame (not even syntactic sugar) on a bad idea; it changes the problem not one bit. Another solution is to do something like
CString s;
((CComboBox *)GetDlgItem(IDC_COLORS))->GetLBText(s,
((CComboBox*)GetDlgItem(IDC_COLORS)->GetCurSel());
if (s == CString("Black"))
{
color = RGB(0, 0, 0);
}
else if (s == CString("Blue"))
{
color = RGB(0, 0, 255);
}
etc. This has the advantage that at least you are not position-sensitive; but you are language-sensitive. Consider a European distributor who can edit the resource file, and change the strings:
Schwartz
Blau
Rot
Grün
The code fails, for the same reason. If the combo box is sorted, the order is all wrong; if the string compare is used, no color ever matches. An application that uses a combo box should be completely independent of the sort order and the language. Trust me. Been there, done that. You will only regret it.
Essentially, you must never write a ComboBox or a ListBox in which there is any assumption whatsoever about the meaning of a particular offset. The integers returned from GetCurSel are fundamentally meaningless except for retrieving the string data or ItemData of the control. They have no other significance, and to assign other significance to them is poor programming practice.
GetCurSel
I have a class called "CIDCombo" which I use in all such cases. This was invented after the second time I did myself in using the preloaded combo box (Note: just because something is available, it does not mean that it is a good idea to use it!) What CIDCombo does is allow me to specify a pair, a string ID from the string table and a relevant mapped value, in a table. The table is basically
CIDCombo
typedef struct IDData {
UINT id;
DWORD value;
}; // defined in IDCombo.h file
IDData colors [] = {
{IDS_BLACK, RGB( 0, 0, 0)},
{IDS_BLUE, RGB( 0, 0, 255)},
...
{0, 0} // end of table
};<
the core loop is essentially
void IDCombo::load(IDData * data )
{
for(int i = 0; data[i].id != 0; i++)
{
CString s;
s.LoadString(data[i].id);
int index = AddString(s);
SetItemData(index, data[i].value);
}
}
So what happens in my OnInitDialog is:
OnInitDialog
BOOL CMyDialog::OnInitDialog()
{
...
c_Colors.load(colors); // Note: <A href="">no GetDlgItem</A>, ever!
...
}
This has numerous advantages over the initialize-in-resource method:
If the string IDS_BLACK is changed to "Noir" or "Schwartz" or something else, the color value is always RGB(0,0,0). And if the combo box was sorted, or not sorted, it doesn't matter; the color names are always properly matched to their color values. Or control flow settings. Or data bits values. Or whatever. Essentially, an combo box that could be initialized from the resource is better served by this method. I've never found an exception.
IDS_BLACK
RGB(0,0,0)
The class is available on the CD-ROM that accompanies our book (Win32 Programming, Rector & Newcomer, Addison-Wesley, 1997), and an instance of it can be downloaded free from this Web site
Another cool feature of CIDCombo is that it automatically resizes the dropdown list so that if at all possible, all the items always show without a scrollbar. No more silly resizing the dropdown "by hand" in the hopes that everything will fit! You'll always see everything, scrollbar-free, unless the whole selection won't fit in the window. The height of the dropdown is dynamically adjusted to fit as many items as possible in, given the position of the combo box on the screen (it will pop up above the combo box if it needs more space and the combo box is low on the screen).
What makes this really nice is that whenever you want the actual value, you can simply use GetItemData to obtain the value.
GetItemData
COLORREF CMyDialog::getColor()
{
int sel = c_Colors.GetCurSel();
if(sel == CB_ERR)
return RGB(0, 0, 0); // or other suitable default value
return c_Colors.GetItemData(sel);
}
How do you handle more complex information? Well one way is to define a struct for the group of information, for example, a somewhat silly example is a dropdown list that describes bean types and their packers. (The reason it is silly is that this would actually be done from a database, but the idea is to make a simple sample)
struct
typedef struct {
UINT weight;
UINT company;
} BeanDescriptor, *LPBeanDescriptor;
BeanDescriptor kidney = { 16, IDS_REDPACK};
LineDescriptor vegveg = { 12, IDS_HEINZ};
LineDescriptor green = { 14, IDS_GENERIC};
IDData lines [] = {
{ IDS_KIDNEY, (DWORD)&kidney},
{ IDS_VEGETARIAN, (DWORD)&vegveg},
{ IDS_GREENBLOCKS, (DWORD)&green},
{ 0, 0} // EOT
};
To use the data, you need to do the following:
LPBeanDescriptor getBean()
{
int sel = c_Beans.GetCurSel();
if(sel == CB_ERR)
return NULL;
return (LPBeanDescriptor)c_Beans.GetItemDataPtr(sel);
}
How do you select an item? Well, you need the moral equivalent of FindStringExact. In this case, the selection is based on an ItemData comparison. For example:
FindStringExact
int CIDCombo::Select(DWORD value)
{
for(int i = 0; i < CComboBox::GetCount(); i++)
{ /* compare */
DWORD v = CComboBox::GetItemData(i);
if(value == v)
{ /* found it */
CComboBox::SetCurSel(i);
return i;
} /* found it */
CComboBox::SetCurSel(-1);
return CB_ERR;
}
}
You can get my implementation of CIDCombo by clicking the link at the top of the article.
Hamed Mosavi wrote:When control sorts items, indexes remain with items, so for example index 5 can be before 2 after sorting, depending to it's corresponding text while we expect to meet 3 after 2.
Bob Flynn wrote:Thanks for the reply.
Bob Flynn wrote:I think your comments support the author's original comments.
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/541/Combo-Box-Initialization?msg=2215877 | CC-MAIN-2014-23 | refinedweb | 1,109 | 52.7 |
Problem
You want to create a class that derives from two or more sources, but Ruby doesn't support multiple inheritance.
Solution
Suppose you created a class called Taggable that lets you associate tags (short strings of informative metadata) with objects. Every class whose objects should be taggable could derive from Taggable.
This would work if you made Taggable the top-level class in your class structure, but that won't work in every situation. Eventually you might want to do something like make a string taggable. One class can't subclass both Taggable and String, so you'd have a problem.
Furthermore, it makes little sense to instantiate and use a Taggable object by itselfthere is nothing there to tag! Taggability is more of a feature of a class than a fullfledged class of its own. The Taggable functionality only works in conjunction with some other data structure.
This makes it an ideal candidate for implementation as a Ruby module instead of a class. Once it's in a module, any class can include it and use the methods it defines.
require 'set' # Deals with a collection of unordered values with no duplicates # Include this module to make your class taggable. The names of the # instance variable and the setup method are prefixed with "taggable_" # to reduce the risk of namespace collision. You must call # taggable_setup before you can use any of this module's methods. module Taggable attr_accessor :tags def taggable_setup @tags = Set.new end def add_tag(tag) @tags << tag end def remove_tag(tag) @tags.delete(tag) end end
Here's a taggable string class: it subclasses String, but it also includes the functionality of Taggable.
class TaggableString < String include Taggable def initialize(*args) super taggable_setup end end s = TaggableString.new('It was the best of times, it was the worst of times.') s.add_tag 'dickens' s.add_tag 'quotation' s.tags # => #
Discussion
A Ruby class can only have one superclass, but it can include any number of modules. These modules are called mixins. If you write a chunk of code that can add functionality to classes in general, it should go into a mixin module instead of a class.
The only objects that need to be defined as classes are the ones that get instantiated and used on their own (modules can't be instantiated).
If you come from Java, you might think of a module as being the combination of an interface and its implementation. By including a module, your class implements certain methods, and announces that since it implements those methods it can be treated a certain way.
When a class includes a module with the include keyword, all of the module's methods and constants are made available from within that class. They're not copied, the way a method is when you alias it. Rather, the class becomes aware of the methods of the module. If a module's methods are changed later (even during runtime), so are the methods of all the classes that include that module.
Module and class definitions have an almost identical syntax. If you find out after implementing a class that you should have done it as a module, it's not difficult to translate the class into a module. The main problem areas will be methods defined both by your module and the classes that include it: especially methods like initialize.
Your module can define an initialize method, and it will be called by a class whose constructor includes a super call (see Recipe 9.8 for an example), but sometimes that doesn't work. For instance, Taggable defines a taggable_setup method that takes no arguments. The String class, the superclass of TaggableString, takes one and only one argument. TaggableString can call super within its constructor to trigger both String#initialize and a hypothetical Taggable#initialize, but there's no way a single super call can pass one argument to one method and zero arguments to another.
That's why Taggable doesn't define an initialize method.[1] Instead, it defines a taggable_setup method and (in the module documentation) asks everyone who includes the module to call taggable_setup within their initialize method. Your module can define a _setup method instead of initialize, but you need to document it, or your users will be very confused.
[1] An alternative is to define Taggable#initialize to take a variable number of arguments, and then just ignore all the arguments. This only works because Taggable can initialize itself without any outside information.
It's okay to expect that any class that includes your module will implement some methods you can't implement yourself. For instance, all of the methods in the Enumerable module are defined in terms of a method called each, but Enumerable never actually defines each. Every class that includes Enumerable must define what each means within that class before it can use the Enumerable methods.
If you have such undefined methods, it will cut down on confusion if you provide a default implementation that raises a helpful exception:
module Complaint def gripe voice('In all my years I have never encountered such behavior…') end def faint_praise voice('I am pleased to notice some improvement, however slight…') end def voice(complaint_text) raise NotImplementedError, "#{self.class} included the Complaint module but didn't define voice!" end end class MyComplaint include Complaint end MyComplaint.new.gripe # NotImplementedError: MyComplaint included the Complaint module # but didn't define voice!
If two modules define methods with the same name, and a single class includes both modules, the class will have only one implementation of that method: the one from the module that was included last. The method of the same name from the other module will simply not be available. Here are two modules that define the same method:
module Ayto def potato 'Pohtayto' end end module Ahto def potato 'Pohtahto' end end
One class can mix in both modules:
class Potato include Ayto include Ahto end
But there can be only one potato method for a given class or module.[2]
[2] You could get both methods by aliasing Potato#potato to another method after mixing in Ayto but before mixing in Ahto. There would still only be one Potato#potato method, and it would still be Ahto#potato, but the implementation of Ayto#potato would survive under a different name.
Potato.new.potato # => "Pohtahto"
This rule sidesteps the fundamental problem of multiple inheritance by letting the programmer explicitly choose which ancestor they would like to inherit a particular method from. Nevertheless, it's good programming practice to give distinctive names to the methods in your modules. This reduces the risk of namespace collisions when a class mixes in more than one module. Collisions can occur, and the later module's method will take precedence, even if one or both methods are protected or private. | https://flylib.com/books/en/2.44.1/simulating_multiple_inheritance_with_mixins.html | CC-MAIN-2021-39 | refinedweb | 1,142 | 61.46 |
This class has all the configuration of snapping and can return answers to snapping queries. More...
#include <qgssnappingutils.h>
This class has all the configuration of snapping and can return answers to snapping queries.
Internally, it keeps a cache of QgsPointLocator instances for multiple layers.
Currently it supports the following queries:
Indexing strategy determines how fast the queries will be and how much memory will be used.
When working with map canvas, it may be useful to use derived class QgsMapCanvasSnappingUtils which keeps the configuration in sync with map canvas (e.g. current view, active layer).
Definition at line 47 of file qgssnappingutils.h.
Definition at line 103 of file qgssnappingutils.h.
Constructor for QgsSnappingUtils.
Definition at line 24 of file qgssnappingutils.cpp.
Definition at line 31 of file qgssnappingutils.cpp.
Deletes all existing locators (e.g. when destination CRS has changed and we need to reindex)
Definition at line 51 of file qgssnappingutils.cpp.
The snapping configuration controls the behavior of this object.
Emitted when the snapping settings object changes.
The current layer used if mode is SnapCurrentLayer.
Definition at line 99 of file qgssnappingutils.h.
Gets extra information about the instance.
Definition at line 528 of file qgssnappingutils.cpp.
Find out which strategy is used for indexing - by default hybrid indexing is used.
Definition at line 114 of file qgssnappingutils.h.
Query layers used for snapping.
Definition at line 166 of file qgssnappingutils.h.
Gets a point locator for the given layer.
If such locator does not exist, it will be created
Definition at line 37 of file qgssnappingutils.cpp.
Definition at line 94 of file qgssnappingutils.h.
Called when finished indexing a layer with snapToMap. When index == count the indexing is complete.
Reimplemented in QgsMapCanvasSnappingUtils.
Definition at line 214 of file qgssnappingutils.h.
Called when starting to index with snapToMap - can be overridden and e.g. progress dialog can be provided.
Reimplemented in QgsMapCanvasSnappingUtils.
Definition at line 212 of file qgssnappingutils.h.
The snapping configuration controls the behavior of this object.
Definition at line 477 of file qgssnappingutils.cpp.
Sets current layer so that if mode is SnapCurrentLayer we know which layer to use.
Definition at line 523 of file qgssnappingutils.cpp.
Set if invisible features must be snapped or not.
Definition at line 472 of file qgssnappingutils.cpp.
Sets a strategy for indexing geometry data - determines how fast and memory consuming the data structures will be.
Definition at line 112 of file qgssnappingutils.h.
Assign current map settings to the utils - used for conversion between screen coords to map coords.
Definition at line 513 of file qgssnappingutils.cpp.
Snap to current layer.
Definition at line 496 of file qgssnappingutils.cpp.
Snap to map according to the current configuration.
Definition at line 227 of file qgssnappingutils.cpp.
Snap to map according to the current configuration.
Definition at line 238 of file qgssnappingutils.cpp.
Toggles the state of snapping.
Definition at line 490 of file qgssnappingutils.cpp.
Definition at line 51 of file qgssnappingutils.h. | https://qgis.org/api/classQgsSnappingUtils.html | CC-MAIN-2019-51 | refinedweb | 497 | 53.98 |
have a simple .Net test process that declares a new TcpClient on a local port and then tries to connect to it. This compiles and runs on Windows but fails to run under Mono 4.6.2 on a Raspberry Pi 3. The error is:
System.InvalidOperationException: Bind has already been called for this socket
at System.Net.Sockets.Socket.set_ExclusiveAddressUse (System.Boolean value) [0x00011] in <bd46d4d4f7964dfa9beea098499ab597>:0
at System.Net.Sockets.TcpClient.set_ExclusiveAddressUse (System.Boolean value) [0x00000] in <bd46d4d4f7964dfa9beea098499ab597>:0
at Test.Module1.Main () [0x00008] in <73e78983e4964489ba5dd5feb2953471>:0
If I comment out the clientSocket.ExclusiveAddressUse = False line then it will run as expected however I need to set ExclusiveAddressUse = False as multiple sockets connect to this port. My process is connecting to a server running locally on the same machine.
Test code to replicate this is:
using Microsoft.VisualBasic;
using System;
using System.Collections;
using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
static class Module1
{
public static void Main()
{
// Declare client socket and client stream to get data
TcpClient clientSocket = new TcpClient();
try {
// Connect to socket
clientSocket.ExclusiveAddressUse = false;
clientSocket.LingerState.LingerTime = 0;
clientSocket.Connect("127.0.0.1", 30005);
} catch (Exception ex) {
Console.WriteLine(ex.ToString);
}
}
}
I wan't sure which component to assign this to hence my choice of misc. Apologies if that is wrong.
I can reproduce it, but it is fixed on master. I will track down where it was fixed, to try to backport to 4.8
This is fixed in master because we imported TcpClient from referencesource, which is not in 4.8 and before.
The test case to reproduce is the following:
> using System;
> using System.Net.Sockets;
>
> class Driver
> {
> public static void Main ()
> {
> using (TcpClient client = new TcpClient ()) {
> client.ExclusiveAddressUse = false;
> }
> }
> }
I've just installed 4.9.0 from the nightly build and can confirm that my test process works so this bug can be closed. I couldn't find 4.8 to install.
Any idea how long it will be before 4.8 or 4.9 are released ? Or is there a way to patch the current 4.6 version ?
Fixed in master.
We don't have plans to backport it to earlier versions | https://bugzilla.xamarin.com/49/49161/bug.html | CC-MAIN-2021-25 | refinedweb | 366 | 63.05 |
Using xUnit
Telerik Testing Framework comes with built-in support for xUnit.net 1.8 and higher. The latest release of xUnit can be downloaded here.
Telerik Automation Infrastructure comes with the following features to facilitate integration with xUnit.net:
Telerik Testing Framework comes with a BaseTest base class under its TestTemplates namespace that can be used as the base class for all your Telerik automation tests running under xUnit.net. The base class provides the following integration features:
Telerik settings can be read directly from the app.Config of your test project. This allows you to configure your Telerik tests using the same .config file that you might be using to store your connection strings or other settings for your test suite.
When installing Telerik Testing Framework, a new fully commented xUnit.net item template will be added to your list of available templates. This will enable you to start using Telerik by simply selecting it from the Add->New Item tool menu (or context menu) available to your VS project. You are provided with both a C# and a VB.NET template.
Getting Started Using xUnit.net
In this section we will walk you through the steps to get you started using Telerik Testing Framework with xUnit.net.
Once you have completed installing Telerik Testing Framework on the target machine, start your Visual Studio environment and open your xUnit.net test library in Visual Studio or create a new one if you are starting from scratch.
Once you have created the project, select the project node in the Solution Explorer and right-click. Then select Add->New Item... (NOTE: Do not use Add->New Test)
Visual Studio will pop-up the Add New Item dialog as shown below.
Expand the Test node displayed on the left then select Telerik Testing Framework. You should see four templates as shown in the image above.
Select the xUnit template.
Enter a name for your test and click Add.
At this point, you should have a new test added to your project and you should be ready to go. The template will automatically add a reference of ArtOfTest.WebAii.dll to your project that contains the Telerik infrastructure and all the initialization and clean-up routines will be setup in your new unit test file.
If you haven't already done so, you'll need to manually add a reference to the xunit.dll.
Start writing your automated Telerik Testing Framework unit tests.
Telerik's xUnit Template
Telerik xUnit.net tests inherit from a base test class called BaseTest that lives in the ArtOfTest.WebAii.TestTemplate namespace. The base class, in addition to providing the integration benefits described above, provides:
- Short-cuts to the commonly used objects within your test code. For example, instead of always typing Manager.ActiveBrowser.Find, there is a first class Find object exposed off the BaseTest class that is set to the Manager.ActiveBrowser.Find instance. The following are the objects and their short-cuts that the base class provides:
Takes care of creating the Manager object and setting up all the above short cuts.
Reads the .config file (if available) and reads the Telerik settings from it and initializes Telerik Testing Framework according to these settings.
The base class also offers different options that you might want to choose from depending on the scenarios and your testing environment. The Telerik template installed on your machines, by default uses the following initialization:
Initialize(false);
Initialize(False)
The above initialization initializes the Telerik Testing Framework but does not enable the RecycleBrowser feature. Because xUnit.net does not have the concept of a test fixture setup/teardown the RecycleBrowser feature cannot be used. If that initialization does not work for you, you can choose to pass in different parameters or choose to do your own custom setup of the framework. For example, if you want to override some of the settings from the .config file in one or two of your test cases, you can simply do the following:
// This will get a new Settings object. If a configuration // section exists, then settings from that section will be // loaded Settings settings = GetSettings(); // Override the settings you want. For example: settings.DefaultBrowser = BrowserType.FireFox; // Now call Initialize with your updated settings object Initialize(settings);
' This will get a new Settings object. If a configuration ' section exists, then settings from that section will be ' loaded Dim mySettings As Settings = GetSettings() ' Override the settings you want. For example: mySettings.DefaultBrowser = BrowserType.FireFox ' Now call Initialize with your updated settings object Initialize(mySettings)
For more information on Telerik Testing Framework's settings & configuration please refer to the Settings and Configuration topic.
Starting Automation
[Test] [Description("My simple demo")] public void SimpleTest() { // Launch an instance of the browser Manager.LaunchNewBrowser(); // Navigate to google.com ActiveBrowser.NavigateTo(""); // verify the title is actually Google. Assert.AreEqual("Google", ActiveBrowser.Find.ByTagIndex("title", 0).InnerText); }
<Test(), _ Description("My simple demo")> _ Public Sub SimpleTest() ' Launch an instance of the browser Manager.LaunchNewBrowser() ' Navigate to google.com ActiveBrowser.NavigateTo("") ' verify the title is actually Google. Assert.AreEqual("Google", ActiveBrowser.Find.ByTagIndex("title", 0).InnerText) End Sub | https://docs.telerik.com/teststudio/testing-framework/using-xunit | CC-MAIN-2019-26 | refinedweb | 858 | 56.76 |
_table_32.S and add the following line at the end
".long sys_newcall" (add without double quotes, but the preceding . should)
unistd_32.h
open the file /usr/src/linux-2.6.32.5/arch/x86/include/asm/unistd_32.h
(all the system calls will be defined in this file using #define macro) 336", then add:
"#define __NR_newcall 337" at the end of the list. (337 is the new system call number)
Increment the "NR_syscalls" by 1. So, if NR_syscalls is defined as:
"#define NR_syscalls 337", then change it to:
"#define NR_syscalls 338" (Since we added a new kernel, so the total number of system calls should be incremented)
syscalls.h
open the file /usr/src/linux-2.6.32.5/include/linux/syscalls.h
Add the following line at the end of the file:
"asmlinkage long sys_newcall(int i);" (without double quotes)
Makefile
Full path of the file - /usr/src/linux-2.6.32.5/Makefile
Create a new directory newcall/ under the folder /usr/src/linux-2.6.32.5
and include that path to /usr/src/linux-2.6.32.5/Makefile
open the /usr/src/linux-2.6.32.5/Makefile
and find the "core-y += " and append newcall/ to the path (please see the image below)
newcall.c
Create a new file called newcall.c with full path: /usr/src/linux-2.6.32.5/newcall/newcall.c
/*---Start of newcall.c----*/
#include <linux/linkage.h>
asmlinkage long sys_newcall(int i)
{
return i*10; //the value passed from the user program will be multiplied by 10
}
/*---End of newcall.c------*/
Makefile
Create a new file called Makefile with full path: /usr/src/linux-2.6.32.5/newcall/Makefile
and paste the following line
obj-y := newcall.o
Create userspace files to test the system call
create two files testnewcall.c and testnewcall.h and the full path of the files are
/home/pradeepkumar/testnewcall.c
/home/pradeepkumar/testnewcall.h
testnewcall.c
#include <stdio.h>
#include "testnewcall.h"
int main(void)
{
printf("%d\n", newcall(15)); // since 15 is passed, the output should be 15*10=150
return 0;
}
testnewcall.h
#include<linux/unistd.h>
#define __NR_newcall 337
long newcall(int i)
{
return syscall(__NR_newcall,i);
}
Note: "_syscall1(long, mycall, int, i)" this can be added instead of
long newcall(int i)
{
return syscall(__NR_newcall,i);
}
Macro _syscall1()
_syscall1(long, newcall, int, i)
the importance of the above syscall is
- The name of the system call is newcall.
- It takes one argument.
- The argument is an int named number.
- It returns an long.
Testing the new system call
Step 1: Recompile and install the new kernel so that our system call becomes available to the operating system. go to the kernel folder and give command make
Step 2: Reboot the system
Step 3: Compile and execute the user space C file (testnewcall.c) that we created above. (gcc testnewcall.c and then execute ./a.out)
RESULT: You should see the output as 150. This has been tested on kernel 2.6.32.5.
Source: (The above link uses kernel version 2.6.17 and it uses different set of files)
My post uses a recent kernel 2.6.32.5 and it modifies different set of files.
Any doubts, please query through the comments….
In the "testnewcall.h"...why do u use header file "unistd.h".....as u already defined _NR_newcall 337..
The unistd header defines the syscall system call.
very nice tutorial!!its very friendly to novice programmers of linux
more power!!
""Makefile
Full path of the file – /usr/src/linux-2.6.32.5/Makefile
Create a new directory
Hi there!
after i compiled then reboot
then compiled the testnewcall.c
and then ./a.out
the answer is not 150 but -1 :(
can u please help me thanks
Hi!I finally got it right, thanks a lot to your Great Tutorial
More power to you!
Hi,
Is _syscallN still valid for use in linux 2.35.4. I seem to be running into errors while using this.and when i do look for the code in unistd.h the unistd.h in arch/x86 doesn't have this .Please comment.
Thanks
Ganesh
It returns -1 for kernel 2.6.35.5? Why?
-1 means there may be some error, keep trying to resolve..
It could be because the kernel is 2.6.35 instead of 2-6-32?
gcc testnewcall.c gives me a error:
testnewcall.h error: invalid preprocessing directive #define __NR_newcall 337... Any idea Why?... i have recompiled everything but still it gives me this error
thanks. this helps my project
hi! can we implementing the same system call through module??how can we do?
[...] [...]
what is the meaning of Create a new directory newcall/ under the folder /usr/src/linux-2.6.32.5 and include that path to /usr/src/linux-2.6.32.5/Makefile.Can you please explain in detail
Right on the button for 32-bit, but what if I'm amd64 and need to use unistd_64.h? syscall_table_32.S does not apply, right? Does something else need to substitute for it?
It returns -1 instead of the expected 150.can any1 help,pliz....??
I'm using linux 2.6.32-21
please check the all the steps...
it doesn't work it reply -1 just
I do every things in detail | https://www.nsnam.com/2010/01/implementing-new-system-call-in-kernel.html | CC-MAIN-2021-43 | refinedweb | 894 | 70.5 |
I'm not sure if this is the right forum for this, but here goes. I have designed and implemented an asp.net web app that is working well. But our company uses some 3rd party web software in ASP. Because it is 3rd party, I can't really migrate it into .net without breaking it, or at least invalidating any support they offer on it.
One of the 3rd party ASP pages is customizable, but again, cannot be changed to .net (that would entail migrating global includes that are used on other asp pages as well). Yet I need to incorporate into that ASP some functionality provided by my asp.net web app. Specifically, I need a way to call .net methods from an asp page.
Now some research gave me the following strategy.
1) Create a .net assembly that contains a class that basically wrapped method calls to the web app assembly and compile it.
2) use regasm to write the appropriate class data to the registry, so it can be called as a COM object.
3) from the ASP page use the following code:
set testobject=server.CreateObject("ogwebComLibrary.testObject")
response.Write (testobject.getText())
Now, the response I kept getting when I viewed the ASP page was "object not set to instance" on the 2nd line: response.write(testObject.getText()).
So a bit more research turned up another way to do this.
1) Create a strong named key and apply it to that assembly. This was more complicated because all other assemblies (like those associated with the web app and the MS Data Application Block) also needed to be compiled with an snk.
2) use gacutil to register it in the global assembly cache
3) create object from ASP page.
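For reference, the registration commands looked roughly like this (paths abbreviated; note that /codebase matters when the DLL is not in the GAC, since without it COM can find the registry entry but not the assembly itself, which produces exactly this kind of instantiation failure):

regasm ogwebComLibrary.dll /codebase /tlb:ogwebComLibrary.tlb
gacutil /i ogwebComLibrary.dll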
Same error message.
On both occasions, I checked the registry and can see the registry keys pointing to the .net object.
Looking at the error message more closely, I can see that ASP is not having a problem SEEING the .net object. It is having a problem instantiating it. Yet the object simply looks like this:
Code (abbreviated):
namespace ogwebComLibrary
{
    public class testObject
    {
        public testObject()
        {
        }

        public string getText()
        {
            return "label"; // abbreviated: really returns the processed page text
        }
    }
}
The object simply sends a processed ASP.NET page (with a single label whose text is set to "label") to an ASP page, stripping out all the HTML dealing with body, form, viewstate, etc. It will also handle ASP pages. Just a little test. It will really wrap calls to another assembly once I can be sure ASP can call its methods.
And an ASP.NET page instantiates it fine. So why won't ASP create the object? It must be seeing the class in the registry (and GAC, for that matter) or I'd get a "class not found"-like exception on the server.CreateObject() line.
I tried using regsvr32.exe, no luck.
VS settings are "use com interop services".
the following lines occur just before the class declaration to enable late binding (for ASP).
[ProgId("ogwebComLibrary.testObject")]
[ClassInterface(ClassInterfaceType.AutoDual)]
I'm banging my head against the wall on this. It will not instantiate this object.
any ideas?
Ian Ohlander
iohlander@ogequip.com
Hi all,

For those interested, here is a MS project file that builds boost_python. Create a directory under boost/libs/python/build; I called mine ./msvc-project. Drop the file into the directory and then load it in DevStudio and build. It will create a ./bin underneath it with boost_python.dll and boost_python_d.dll.

David, (if you are listening) given the work in Boost.Build I'm not sure if you are interested in this at all - though I would hazard a guess that there will be more requests. In any case - adding some extra warning switches to detail/config.hpp makes the build a little easier to watch, both for bjam and DevStudio.

--- config_old.hpp Sat Nov 09 19:47:06 2002
+++ config.hpp Sat Nov 09 16:19:26 2002
@@ -32,7 +32,11 @@
 # define BOOST_MSVC6_OR_EARLIER 1
 # endif

-# pragma warning (disable : 4786)
+# pragma warning (disable : 4786) // disable truncated debug symbols
+# pragma warning (disable : 4251) // disable exported dll function
+# pragma warning (disable : 4800) //'int' : forcing value to bool 'true' or 'false'
+# pragma warning (disable : 4275) // non dll-interface class
+
 # elif defined(__ICL) && __ICL < 600 // Intel C++ 5

-------------- next part --------------
A non-text attachment was scrubbed...
Name: boost_python.dsp
Type: application/octet-stream
Size: 5834 bytes
Desc: not available
URL: <>
In this section, you will add model classes that define the database entities. Then you will add Web API controllers that perform CRUD operations on those entities.
Add Model Classes
In this tutorial, we'll create the database by using the "Code First" approach to Entity Framework (EF). With Code First, you write C# classes that correspond to database tables, and EF creates the database. (For more information, see Entity Framework Development Approaches.)
We start by defining our domain objects as POCOs (plain-old CLR objects). We will create the following POCOs:
- Author
- Book
In Solution Explorer, right click the Models folder. Select Add, then select Class. Name the class Author.
Replace all of the boilerplate code in Author.cs with the following code.
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

namespace BookService.Models
{
    public class Author
    {
        public int Id { get; set; }
        [Required]
        public string Name { get; set; }
    }
}
Add another class named Book, with the following code.
using System.ComponentModel.DataAnnotations;

namespace BookService.Models
{
    public class Book
    {
        public int Id { get; set; }
        [Required]
        public string Title { get; set; }
        public int Year { get; set; }
        public decimal Price { get; set; }
        public string Genre { get; set; }

        // Foreign Key
        public int AuthorId { get; set; }
        // Navigation property
        public Author Author { get; set; }
    }
}
Entity Framework will use these models to create database tables. For each model, the Id property will become the primary key column of the database table.
In the Book class, the AuthorId property defines a foreign key into the Author table. (For simplicity, I'm assuming that each book has a single author.) The Book class also contains a navigation property to the related Author. You can use the navigation property to access the related Author in code. I say more about navigation properties in part 4, Handling Entity Relations.
Add Web API Controllers
In this section, we’ll add Web API controllers that support CRUD operations (create, read, update, and delete). The controllers will use Entity Framework to communicate with the database layer.
First, you can delete the file Controllers/ValuesController.cs. This file contains an example Web API controller, but you don’t need it for this tutorial.
Next, build the project. The Web API scaffolding uses reflection to find the model classes, so it needs the compiled assembly.
In Solution Explorer, right-click the Controllers folder. Select Add, then select Controller.
In the Add Scaffold dialog, select “Web API 2 Controller with actions, using Entity Framework”. Click Add.
In the Add Controller dialog, do the following:
- In the Model class dropdown, select the Author class. (If you don't see it listed in the dropdown, make sure that you built the project.)
- Check “Use async controller actions”.
- Leave the controller name as "AuthorsController".
- Click the plus (+) button next to Data Context Class.
In the New Data Context dialog, leave the default name and click Add.
Click Add to complete the Add Controller dialog. The dialog adds two classes to your project:
AuthorsController defines a Web API controller. The controller implements the REST API that clients use to perform CRUD operations on the list of authors.
BookServiceContext manages entity objects during run time, which includes populating objects with data from a database, change tracking, and persisting data to the database. It inherits from DbContext.
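The generated context class typically looks something like the sketch below (this is what the EF6 scaffolding usually produces; the connection-string name is an assumption):

using System.Data.Entity;

namespace BookService.Models
{
    public class BookServiceContext : DbContext
    {
        // "name=BookServiceContext" points at a connection string in Web.config
        public BookServiceContext() : base("name=BookServiceContext")
        {
        }

        public DbSet<Author> Authors { get; set; }
        public DbSet<Book> Books { get; set; }
    }
}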
At this point, build the project again. Now go through the same steps to add an API controller for Book entities. This time, select Book for the model class, and select the existing BookServiceContext class for the data context class. (Don't create a new data context.) Click Add to add the controller.
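To give a feel for what the scaffolding produces, an async read action in the generated BooksController typically looks something like this sketch (db is the context field the scaffolder adds; details can vary by tooling version):

using System.Data.Entity;
using System.Threading.Tasks;
using System.Web.Http;
using System.Web.Http.Description;
using BookService.Models;

namespace BookService.Controllers
{
    public class BooksController : ApiController
    {
        private BookServiceContext db = new BookServiceContext();

        // GET: api/Books/5
        [ResponseType(typeof(Book))]
        public async Task<IHttpActionResult> GetBook(int id)
        {
            Book book = await db.Books.FindAsync(id); // async lookup by primary key
            if (book == null)
            {
                return NotFound();
            }
            return Ok(book);
        }
    }
}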
This article was originally created on June 16, 2014 | http://www.asp.net/web-api/overview/creating-web-apis/using-web-api-with-entity-framework/part-2 | CC-MAIN-2014-42 | refinedweb | 601 | 59.19 |
MRU4Clipboard Crack With Full Keygen Free [March-2022]
MRU4Clipboard is a small software application whose purpose is to help you automatically monitor your clipboard content for new items and store the information with the aid of an integrated history. It can be deployed on all Windows versions out there. The program lets you make use of multiple text entries simultaneously.
MRU4Clipboard Crack+
Monitor your clipboard for new text records and save them automatically to a list.
Features:
Monitor clipboard content for new text records;
Automatically save the data;
Personalize entries for later viewing with favorite text entries;
Automatic restoring of the default entry order;
Customizable sorting order;
Automatic copying and deleting of clipboard entries;
Automatic saving of clipboard records in a list;
History management;
Support for 32-bit and 64-bit versions of Windows;
Support for 32-bit and 64-bit versions of Windows 7 and Vista;
No configuration or installation needed;
Fast and no cost.
MRU4Clipboard Crack Mac Screenshots:
What is new in the official MRU4Clipboard 5.1 software version? Different software updates. What is expected in the future? The newly-made MRU4Clipboard 5.2 can be downloaded from the current page, and we are also looking forward to the unconfirmed 5.3 release build. You may download ruelhere.exe directly; the estimated download time over ISDN or CDMA [~128 kbit/s] is 0:01:14. Just write a review of MRU4Clipboard. System requirements are unspecified. Click on the download button and just wait...
System requirements:
Windows XP/Vista/Win 7/8 and 10.
Program has been scanned and verified by the several antivirus and anti-spyware applications and MRU4Clipboard found to be clean. No guide or MRU4Clipboard tutorial available.
Cerdoza Free Edition is a simple yet useful and powerful piece of software that enables you to draw or copy the contents of a web page on your computer. All you need to do is drag the contents of any web page that you want to copy onto the program and click "copy". Cerdoza will then give you the option of saving the document as a graphic file on your hard disk.
For a more extensive tutorial, including some of the best features and how to’s check out Cerdoza Free Edition in action!
Key Features
Cerdoza Free Edition is a light and easy-to-use application, that allows you to copy any web page, that is, any text, graphic, or image from any browser, into a
MRU4Clipboard Crack License Keygen
File size: 6.54 MB
MRU4Clipboard Review
What’s New in the MRU4Clipboard?
Manage multiple clipboard records with MRU4Clipboard (Multiple Records of clipboard). It supports:
Record clipboard items automatically and store them
Keep your clipboard in order
Keep all records at the same position
Show the list in a listbox
Works with the following formats:
Text (plain format)
Html
Any text file
PDF
Image file
List file
Switch between lists:
Record
Clear list
Save list
Slim view
Copy all records to the clipboard
Copy selected record to the clipboard
Copy all favorites to the clipboard
View
Clipboard Manager
Multiple Log/Display Windows
Split window
Process/Windows Spy
Clock/Calendar
Bar Charts
Powerpoint/Excel
Command/Text
MRU4Clipboard Pros/Cons:
# Support for Clipboard from Windows XP
# Support all clipboard formats, a small tool
# Change history can be saved to a text file
# Multiple clipboard records (each records has 1 history)
# Supports general clipboard, into or from a specific application
# Run silently in the background
# Shows the record in a list, all records in a listbox, or in a textbox
# Download source code here
6.09 MB
LiteStudio Pro 11. Publisher: LiteStudio Soft. Date Released: 2015-08-01. LiteStudio Pro is a powerful application development tool for the MS-Windows platform. It helps users to create applications in an easy way.
7.54 MB
Outlook Express 2.8. Publisher: Michael Ciosso. Date Released: 2011-07-07. Outlook Express is a free replacement for Outlook; it was designed to work with Microsoft Outlook Express 5.5, 5.0, 4.0 and 3.0 (32-bit and 64-bit). There is no support for Vista, Windows 8 and Windows 8.1, but you can install it on these versions, and the advantage is that you will not have the message and mail problems you would otherwise encounter.
1.62 MB
RMPro. Publisher: Techsex. Date Released: 2008-01-01. RMPro is a backup management and automation tool. It is designed to make backups easier and more effective for a single computer or a large organization with multiple computers. RMPro can create backups on a file-and-folder basis or on a schedule. RMPro can back up locally and over the network via FTP or WebDAV. The targeted backup
System Requirements For MRU4Clipboard:
1GB RAM
OS: Windows 7, Windows 8
DirectX: 9.0
HDD Space: 12 GB
Stefano Sabatini <stefano.sabatini-lala at poste.it> writes: > On date Saturday 2011-02-12 17:55:06 +0000, M?ns Rullg?rd encoded: >> Stefano Sabatini <stefano.sabatini-lala at poste.it> writes: >> >> > Improve readability. >> > --- >> > ffplay.c | 30 +++++++----------------------- >> > 1 files changed, 7 insertions(+), 23 deletions(-) >> > >> > diff --git a/ffplay.c b/ffplay.c >> > index de2a594..bd0ec73 100644 >> > --- a/ffplay.c >> > +++ b/ffplay.c >> > @@ -724,31 +724,15 @@ static void video_image_display(VideoState *is) >> > is->dtg_active_format = is->video_st->codec->dtg_active_format; >> > printf("dtg_active_format=%d\n", is->dtg_active_format); >> > } >> > -#endif >> > -#if 0 >> > switch(is->video_st->codec->dtg_active_format) { >> >> Why is this #if 0 block there at all? > > Legacy code, I don't have the time to inquire now but with the compact > notation I get less "distracted" by it. Is there any reason to believe it is useful at all? Does it even compile if enabled? -- M?ns Rullg?rd mans at mansr.com | http://ffmpeg.org/pipermail/ffmpeg-devel/2011-February/103888.html | CC-MAIN-2016-40 | refinedweb | 148 | 55.3 |
Case name.
The first version of the code I wrote, to get around this, was ugly:
def import(folder)
  files = []
  files << Dir.glob(folder + "**/*.HL7")
  files << Dir.glob(folder + "**/*.hl7")
  files << Dir.glob(folder + "**/*.Hl7")
  files << Dir.glob(folder + "**/*.hL7")
  files.flatten.each { |file| LabResult.import(file) }
end
After some googling, I found this post where someone is asking for the same thing, wondering if there is a way to do case insensitive Dir.globs. About halfway down the list of replies, I found what I was looking for.
Now my code is much less ugly, and much less intuitive:
def import(folder)
  files = Dir.glob(folder + "**/*.hl7", File::FNM_CASEFOLD)
  files.each { |file| LabResult.import(file) }
end
Really?! File::FNM_CASEFOLD?! Yeah… so much for Ruby’s “intuitive” and “natural language” syntax. At least it works. Now I just have to add a comment to my code to let people know what this is doing.
Anyone know of a better, more intuitive way of doing case-insensitive Dir.globs?
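One arguably more readable alternative (a sketch I haven't tested against the real import code) is to glob everything and filter with a case-insensitive regex instead:

def import(folder)
  # match any file whose extension is .hl7 in any casing
  files = Dir.glob(folder + "**/*").grep(/\.hl7\z/i)
  files.each { |file| LabResult.import(file) }
end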
“We. This session from GOTO London will look at various approaches and tools that you can use to visualize, document, and explore your software architecture in order to build a better team.
Introduction (0:00)
Today we’ll be taking a short tour of visualizing, documenting, and exploring software architecture, starting with a short example:
Example
Imagine we’ve invented teleporters, and I teleport you here.
Where are you? France? Close, but not quite.
This is what happens when new people are added into a code base. They get thrown into the middle of a project and end up somewhat lost.
How do we solve this problem? How do we figure out where we are?
We zoom out, we use technology, we open up the maps on our phone and start zooming out.
This may be slightly better, but we still aren’t quite sure where we are, so we need to zoom out again.
We may be able to see where we are, but there is a lot of cluttering on the map, and it’s not very clear to see whats going on. In a program like Google Maps, we can reduce the amount of information to make the picture a bit clearer.
Now that the image is a bit clearer, we can start to see the names of places and where some of the bits and pieces of the landscape are. However, if you’ve never hear of Jersey before, this is still kind of useless forcing you to zoom out a few more times.
Now that we’ve zoomed all the way out, you can see that Jersey is a small island of the coast of France. If you come and visit Jersey, which you should because it’s lovely, when you come through the airport you can get a map. The map is divided up and tells you enough information to get around and find the major sites. It doesn’t tell you everything however. It won’t list every street, with every building on every street either. There may be a zoomed in area of the main city with more detail, but it’s a representation and isn’t completely accurate.
The thing these maps have in common is that they show points of interest. It shows you what you would really want to see if you visit Jersey. Contrast this map with an ordnance survey map, which shows a very detailed version of Jersey with contours of different types of land. To read this map you would need some sort of intelligence and some help getting started to interpret it.
Both maps however show selected highlights and points of interest.
With these points of interest, there are history lessons that come with them along with other detailed information about the location, which could all be found in guidebooks.
Visualization (4:08)
I run a visualization workshop with software teams all over the world and we give people requirements and then give them 90 minutes to draw some pictures, and groups come up with all kinds of crazy diagrams:
I’m sure these types of diagrams all look familiar.
Oftentimes when running the workshop I hear people say as they draw:
“…this doesn’t make sense but we’ll explain it later when we do our presentation or something…“
This is fine in some instances, but we don’t always present our own diagrams. I like to remind teams of this by having two teams swap diagrams, and because they weren’t part of the conversation creating those diagrams, they have no idea what’s going on. Teams can’t understand the color coding or the shapes or the lines and basically none of the notation makes any sense whatsoever.
Teams always say it was an easy exercise but yet diagrams are always a mess. We don’t really know what to draw, the levels of details, shapes, notations, whether we should use UML etc…
Who still uses UML?
I’ve asked this question around the world and UML is massively falling out of fashion. I have no evidence to back any of this up, this is all completely anecdotal, but I’m seeing more and more teams who have no UML skills. Personally, I use UML, but I use it sparingly for small parts of a software system such as a class hierarchy.
If you Google search a software architecture diagram you get this:
Page after page after page of essentially pretty-colored block pictures, the sorts of diagrams you can create in Visio or PowerPoint, and these are exactly the types of diagrams I see when I go and visit organizations, and half the pictures simply don’t make any sense.
I’ve run this workshop for about 10,000 people now, all around the world, and nobody does it sufficiently well the first time around.
Notation Tips (6:58)
One of the great things Agile has done is that it's made us more visual. Whenever I go and visit Agile organizations, they have kanban boards, story walls and information dashboards, demonstrating that we have become awesome at visualizing the way we work; however, we have totally forgotten how to draw pictures of the things we are building. This is simply about good communication. If you want to move fast as a team, if you want business agility, then you need to communicate well.
Here are some really simple tips around notation.
- Put titles on your pictures
- Make sure your arrows are annotated
- Make sure your arrows point one way
Responsibilities
The notation around drawing architecture diagrams is easy to fix. One of the key points is responsibilities. We often joke that naming is hard in software, so it doesn’t make sense that most of our architecture diagrams are essentially just a collection of named boxes because this creates a huge amount of ambiguity. A simple fix for this is to simply add more text to your diagrams.
Here’s a really simple example.
Here are two versions of the same diagram. The one on the right has more text that allows us to see things such as the responsibilities of the building blocks.
In terms of content, you can't show everything on a single picture when drawing architecture diagrams, which is why people talk about things like views, viewpoints and perspectives. Philippe Kruchten's 4+1 view model describes how to take these points into consideration.
Logical architecture diagrams
A problem with a lot of view catalogs is that they have a logical view of a system separate to the development view of the system. The logical view is often either the functional, logical, or conceptual building blocks of the system, and then there is a separate entity that refers to how we are building the system.
I often find, when I go to organizations, that their nice, fluffy, logical architecture diagrams never match the code they are made to represent. If an architecture diagram doesn't match the code, it's simply lying to me. George Fairbanks calls this the "model-code gap."
When discussing architecture, we use abstract concepts such as modules, components or services, but we don’t have these things in our programming languages. For example, in Java there is no ‘layer’ keyword, but we create components and layers by assembling classes, interfaces and packages together.
The ultimate goal of having a discussion like this is to eventually have a set of diagrams that actually reflects code. Before we can tackle that problem however, we need to deal with the fact that we still don’t have any sort of consistent, standard language for talking about software architecture.
Examples (10:19)
The image above is clearly a map of London. If you look at the map you are able to recognize the blue thing that stretches across the map as a river, specifically the River Thames. We all know that a river is a body of water flowing in some direction. Based on this information we can now go find other rivers.
We can see in this next picture that we are looking at the floorplan of a bathroom. In this floorplan it is easy to find the toilet, and we all know what a toilet is and can find other toilets on other floorplans.
There are a few ways of representing circuits in electrical engineering. The top image has the cartoon pictorial representation of circuit elements and the bottom shows a schematic version. Most people can identify the squiggly line as a resistor and know that a resistor slows down current, and can then use this knowledge to find other resistors and build more circuits. An engineer could go through a box of components, find the resistors, and use the color coding to identify how strong each resistor is.
To ramp up the complexity of this example, let's take a look at the following diagram.
Looking at this diagram, we really can't tell what it is. There are two UML diagrams, where the boxes represent components. Components are logical, abstract, functional building blocks. In this example, one box is a stereotyped database and another is a JDBC interface, which makes it sound like a database component. Others represent UIs or applications, and items in the middle represent business components. Where do these components run? Do they run in the database as part of the app? Are they microservices? This diagram is open to lots of interpretation. If this diagram had more text, we would at least know what these items were.
To look at this simply, imagine we are building a simple system consisting of a web app and a database. The word component means "part of". For some people, the web app is a component of the entire system, but for other people, a logging component is a component of the web app. The same word is being used for different levels of abstraction.
A common discussion is the lack of a ubiquitous language between the developers and the business people, but we don't even have that language among ourselves. UML tried to create this language, but it was too much. It attempted to create a standard notation and a standard level of abstraction and failed on both counts. I think that the industry needs a standard set of abstractions, and should eventually create something like electrical engineering's standard set of symbols to represent things; however, we need to create the language first. The language that we create needs to reflect the technology that we are using, merging the logical and development views back together and creating real terminology that maps to real technology.
Container Model (13:49)
I don’t know how we could achieve an ubiquitous on a global scale, but within the boundaries of this presentation, I can show you some techniques. When I am discussing a software system, that software system is made up of containers where a container is simply something that stores data or runs code. To relate that to real terms, a container could be a web app, a Windows service, a database schema, etc. If you open a container, they are made of different components where components refer to something running inside a run-time environment. Essentially, it is a cohesive grouping of stuff with a clear interface when we are done. Since I mostly deal with Java and C#, components are built from classes. This creates a nice hierarchical tree structure to describe the static structure of a software system. If you’re using JavaScript, this makes no sense, so perhaps you use modules and objects or functions as components. The same can also be said with functional languages. Perhaps you are using a database technology, this can be adapted to components and stored procedures. You take the same hierarchical approach and map it to the same tech that you are using.
The ultimate goal is to create a single, simple static model of a software system on all levels, from viewing the system as a black box down to the code, with levels in between. Once you have a defined language, as we have, creating diagrams becomes really easy.
The C4 Model
The C4 model starts with a context diagram; you zoom in to see the containers, zoom in further to see the components, and can even go down to code if you'd like. However, I don't usually go that far, especially if I'm trying to describe an existing code base quickly.
Tech Tribes
I created a site called Tech Tribes, which is just simple content aggregated for the local tech industry. Here is a context diagram for Tech Tribes, representing the system I built.
There are different types of users and different system dependencies. If this was an interactive Google Map, we could select and pinch to zoom in.
We see the containers inside the system boundary. If we select a container, we can pinch to zoom in, and show the components inside it, and so on and so forth.
It is a simple hierarchical diagram that maps onto the language and ultimately we get to the code. Ideally there is a nice clean mapping between all of these layers, and this would actually represent what the code looks like.
Basically, diagrams are maps and you need different types of maps depending on how much information you have about what you want to learn about, or the audience you're speaking to. For presenting to business and other non-technical people, a high level view works well. If you are showing your system to a developer, something low level would be good.
I don’t want you to take away any tips around notation. This is the notation that I use just because it’s very simple, and I tend to use things like color coding and shapes to supplement an existing diagram that already makes sense.
These are two representations of the same diagram. One has shapes and one doesn't. Fundamentally, there is no additional information on the one with shapes, yet it is more appealing. It's worth noting that there are a lot of other things worth considering when trying to describe your software architecture. Philippe Kruchten's work as well as Eoin Woods' book has a lot of good information regarding views and viewpoints.
Using the C4 model is not a design process, it is simply a set of diagrams that you can use during an up front design exercise or even to use retrospectively. If you have an existing code base with no documentation, this is a really good starting point.
What Tools Should I Use? (17:39)
A common question I receive is what tools I recommend, and don’t say Visio. This is because it’s just a set of boxes and lines, and is simply a general purpose diagramming tool not a modeling tool. If you look at the building industry, it doesn’t use Visio. They use three dimensional models of a building and surface different views from it. The irony of course is that we as developers build these tools for them, but don’t have any for ourselves.
Structurizr
I’m trying to solve lots of problems here and one of my approaches is a set of tooling called Structurizr. Structurizr is part SaaS product, part open source. In its very simplest form, you can write a simple, demanding specific language to create diagrams. Really this is an implementation of the C4 model with people, software systems, containers, and components I showed earlier. Structurizr is great for sketching up something small and simple, a single diagram at a time.
If I have an existing code base, why can’t I just auto-generate diagrams?
- You just get chaos.
Is that because your code base is chaos?
- Sometimes, but often not.
Often your diagram is just showing too much detail. Structurizr is all cloud based and many companies don’t want to send their entire architecture into the cloud, but many of my potential customers like Structurizr. In order for companies to still get the benefits I have recently built a simple on premise API because Structurizr is essentially a JavaScript app running in the browser. After installing the API, users can store their data locally. The API is only about 1,000 lines of code and here is the UML diagram for it.
It’s not particularly useful is it? The diagram is showing us all of the code level elements and all of the relationships between them and it’s hard to pick out what the important parts of this code base are. Even with less than 1,00 lines the diagram is already useless, imagine if the code base was 100,000 or 1,000,000 lines; the diagram would become unreadable. This is simply because diagramming tools see code not components; they are unable to “zoom out” and show the user bigger abstractions, again creating a model-code gap.
Software engineers have been dealing with this problem for a long time. A paper was published in the 1990s that noted that if you ask an engineer to draw a picture of their software, they will create a nice high-level view. However, if you reverse engineer a diagram from the code, the result is completely different. The reverse-engineered diagram will be very accurate, but it's not how the engineer thinks.
This age-old problem ties back to a simple question:
What is a component?
What is a component? (21:00)
If I want to draw a component diagram, I need to understand what a component is. Referring back to my class diagram, the "WorkspaceComponent" and "ApiServlet" boxes are what I would consider the components of this API. There is a Java servlet that handles the API requests and a workspace component dealing with the structure of the workspaces. You may have heard of "serverless"; this is "framework-less." This is one of the simplest implementations you could possibly write. There are two major components, and the rest of the code forms parts of these components.
We have to assume that the code is the single point of truth; the code is the final embodiment of all the architectural ideas. If you were to give me your code base, could I generate a context diagram by looking for references to people and software systems in your code base? The answer is no, because we don't have that information in our code base most of the time. The same can be said for containers. There simply isn't enough information in the code to be found by scraping data from the code base.
It is at the component level that I really want to generate diagrams automatically, because it is the most volatile and changes frequently. To help this, George Fairbanks says that we should adopt an architecturally-evident coding style.
Architecturally-evident Coding Style (22:29)
Architecturally-evident coding means embedding information into your code base so that the code base reflects your architectural ideas and intent. Concretely, it is simple things like using naming conventions. For example, if you have a logging component in your code base, make sure you have something called "logging component." Another example could be a namespacing or packaging convention, where there is one folder, one namespace, or one package per box on the diagram. It could also be machine-readable metadata, annotating the things that are important, such as labeling components. By using this, we can then extract useful information from the code base and supplement that information where extraction isn't possible. Ideally, we as an industry should move away from drawing diagrams in programs like Visio.
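As a quick illustration of that machine-readable metadata in Java (a sketch; Structurizr ships a similar annotation, but treat the exact name and attributes here as assumptions):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A hypothetical marker annotation carrying architectural intent.
@Retention(RetentionPolicy.RUNTIME)
@interface Component {
    String description() default "";
}

// The annotation is what a component finder scans for at build time.
@Component(description = "Stores and retrieves workspace data")
class WorkspaceComponent {
    // implementation elided
}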
Architecture Description Language (23:30)
An architecture description language, a term most readers will probably be unfamiliar with since it hasn't entered the mainstream industry, is a textual description of something such as the static structure of a software system. There are many languages out there, such as Darwin or Koala, but the syntaxes are horrible, essentially meaning developers have to learn another strange-looking language in order to describe the piece of software that they are building. However, this is a fantastic concept, because we are no longer dealing with diagrams, but with text. As developers, we like text; we can "diff" text and we have tooling to support text. We need to take all the concepts discussed above and create an architecture description language using the general-purpose code that we are using to build our systems.
This is what the other piece of Structurizr is. There are two open-source libraries, one for Java and one for .NET, each a small implementation of the C4 elements discussed earlier. They contain a handful of classes that let you create people, software systems, containers, and components and bind them together to describe your software architecture. This is how we would use this to describe my API from earlier:
// software systems and people
Person softwareDeveloper = model.addPerson(Location.Internal, "Software Developer", "A software developer.");
SoftwareSystem structurizr = model.addSoftwareSystem(Location.External, "Structurizr", "Visualise, document and explore your software architecture.");
SoftwareSystem structurizrApi = model.addSoftwareSystem(Location.Internal, "Structurizr API", "A simple implementation of the Structurizr API, which is designed to be run on-premises to support Structurizr's on-premises API feature.");
softwareDeveloper.uses(structurizr, "Uses");
structurizr.uses(structurizrApi, "Gets and puts workspace data using");

// system context view
SystemContextView contextView = views.createSystemContextView(structurizrApi, "Context", "The system context for the Structurizr API.");
contextView.addAllElements();
A software developer uses my Structurizr product, which uses the API to store information locally. This code reflects that model. We can then use that code to create a system context view by adding the appropriate things to the diagram. The resulting picture is a very simple way to describe high-level structures of a software system.
If we look at the container level, we have a similar picture. From a container perspective, all I have is an API server, which is a Java web app storing information on a file system. We write code to create a couple of containers, wire them together using method calls, and create some diagrams.
// containers
Container apiApplication = structurizrApi.addContainer("API Application", "A simple implementation of the Structurizr API, which is designed to be run on-premises to support Structurizr's on-premises API feature.", "Java EE web application");
Container fileSystem = structurizrApi.addContainer("File System", "Stores workspace data.", "");
structurizr.uses(apiApplication, "Gets and puts workspaces using");
apiApplication.uses(fileSystem, "Stores information on");

// container view
ContainerView containerView = views.createContainerView(structurizrApi, "Containers", "The containers that make up the Structurizr API.");
containerView.addAllElements();
Essentially, you write code, and get pictures. This is very good for high-level stuff; however, once you get down to components, you don't want to have to do this. Because of this, the open-source libraries have some component finders in them. The underlying question now shifts to "How do you find components?" The answer is simply that it is up to you, because every code base is different. If you follow an architecturally-evident coding style and have a naming convention, you can find components based on this convention. If you use a framework like Spring, for example, you can find Spring annotations and call those components, etc.
This is the code I use to find the components in my API application.
// components
ComponentFinder componentFinder = new ComponentFinder(
    apiApplication,
    "com.structurizr.onpremisesapi",
    new TypeBasedComponentFinderStrategy(
        new NameSuffixTypeMatcher("Servlet", "", "Java Servlet")
    ),
    new StructurizrAnnotationsComponentFinderStrategy(),
    new SourceCodeComponentFinderStrategy(new File("../src"), 150));
componentFinder.findComponents();

structurizr.uses(
    apiApplication.getComponentWithName("ApiServlet"),
    "Gets and puts workspaces using", "JSON/HTTPS");

// component view for the API Application container
ComponentView componentView = views.createComponentView(apiApplication, "Components", "The components within the API Server.");
componentView.addAllElements();
There are a few different strategies for finding components here. I want to find types whose names end with the word "Servlet," and I want to find types that I've annotated with my own component annotation. After we find them, we wire them together. There is also some logic behind the scenes that finds the inter-component dependencies and creates a diagram.
The API servlet and the workspace component have both been found, and a relationship between them has been identified. In order to get the additional text into the diagram, additional metadata must be added to your code base.
Visualization as a Model (27:11)
This whole process is really about creating a model. I want developers to get away from drawing diagrams and move back towards using modeling as an approach for describing software. Once you have a model, you can do lots of interesting things, like generating diagram keys automatically. We can move away from horrible notation we don't understand. We can hyperlink the model to the code, so that when you view the diagrams, you can click the components and go straight to GitHub, which shows you the exact implementation of the item in the diagram.
Diagrams as maps
We want to look at diagrams as maps of our architecture, like our analogy showed earlier. One problem to this approach is scale. When I used this tooling we’ve developed on a web app I created this was the result:
The diagram shows web app controllers and components, and the result is truly horrible. The code itself isn't horrible, but the diagram is. Since this is a model, however, you don't have to show everything; you can show the user just a slice of the system. Perhaps a slice starts from a web app controller or an entry point of your system, and shows everything from there until you drop out of the bottom of the app. You can essentially create a larger number of simpler pictures, allowing you to deal with scale. Once you have a model, you can put it into lots of other types of tooling.
For example, if you’re a fan of Graphvis, the Java Open Source library is a Graphvis exporter that creates a DOC file that can be placed in Graphvis to auto-generate diagrams for you.
If you connect this whole idea to your build process, your documentation and your diagrams remain up to date as your code changes, which is ultimately the point I’m trying to make.
Documentation (29:13)
Many people are no longer documenting anything, which probably sounds a bit extreme. We can thank the Agile Manifesto for the fall of documentation, because people misinterpret what it says about documentation.
If I dropped you into a project that is not familiar to you, and you've never seen the code base before, you would feel lost. You would have to start "zooming" around, exploring to try to figure out where you are, which of course takes time. As you explore the code base, you'll realize that the code doesn't tell you everything you want to know, especially things like rationale and intent. The reasoning behind decisions is often omitted from the code base. There is often lots of "tribal" knowledge, where teams have specialists and experts in particular parts of the code base. This is all fine until one member gets run over by the proverbial London bus.
The Bus Factor
Imagine that you have a small team, and one member does get run over by a bus. Another member goes on sabbatical for a year, and we have to fire someone else because they are useless. After all this, we have a much smaller team, and soon issues arise where a team member asks:
“You know that thing we have to run every week… … what is it..?”
Though this may seem like an extreme case, situations like this do happen. How do we fix our documentation problems? This is where the SAD, the software architecture document, comes into play. There are lots of templates out there for documentation; every consulting company I have ever worked at has created their own. These architecture documents usually include some insightful and interesting information, such as how the team arrived at their design, what some design decisions were, what the architecture is and how they look after it. In reality these documents tend to be horrible: hundreds of pages long, out of date, and just totally useless.
To fix this, naming turns out to be our friend. If we rename the document and call it a guidebook instead, all our problems go away. Like a tourist guidebook, it includes maps to navigate the unfamiliar environment, itineraries, points of interest, the history, etc. For a software guidebook, the maps are diagrams of the architecture; they show what the code looks like, what parts of the code base are important, and how the code base evolved to become what it is today. To make these documents more tolerable, my simple tip is to only include and describe what you can't see from the code base. Essentially, knock it up a level of abstraction and keep things small. Avoid having hundreds of pages of things that simply become out of date and irrelevant. It is meant to be a living, breathing, evolving document that changes with the code base, not an up-front design. It is a supplementary piece of documentation that is meant to sit alongside the code base. It is a product-related document; essentially, every software system should have a user guide.
Documentation tooling
Many teams use Word or SharePoint. Lots of teams also use Confluence. Another technique I'm seeing more teams use is Markdown. They create documentation files and put them next to the source code in source code control. At build time the documentation is processed: generating HTML, uploading to websites and wikis, etc.
Something I want to do with Structurizr is to create a software architecture model that contains the model, the visualization, and the documentation. Here is some code I wrote to document my API application from earlier.
// documentation
File documentationRoot = new File(".");
Documentation documentation = workspace.getDocumentation();
documentation.addImages(documentationRoot);
documentation.add(structurizrApi, Type.Context, Format.Markdown, new File(documentationRoot, "context.md"));
documentation.add(structurizrApi, Type.Data, Format.Markdown, new File(documentationRoot, "data.md"));
documentation.add(structurizrApi, Type.Containers, Format.Markdown, new File(documentationRoot, "containers.md"));
documentation.add(apiApplication, Format.Markdown, new File(documentationRoot, "components.md"));
documentation.add(structurizrApi, Type.DevelopmentEnvironment, Format.Markdown, new File(documentationRoot, "development-environment.md"));
documentation.add(structurizrApi, Type.Deployment, Format.Markdown, new File(documentationRoot, "deployment.md"));
documentation.add(structurizrApi, Type.Usage, Format.Markdown, new File(documentationRoot, "usage.md"));
It is several simple Markdown files which you upload as part of the model, and some documentation is generated for you. I want to keep everything in one place so you can embed diagrams into your documentation.
There are lots of other open-source tools on GitHub for living documentation that can be used for creating documentation from code. For example, there is a German team that has a software architecture document template they call arc42. It is a lightweight and lean approach to documenting software systems, and is very similar to my own approach.
Documentation length
Many people ask me how long documentation should be. Asking how many pages is the wrong question. What we are really looking for is a document that can be read in one to two hours, over a coffee or two. The idea is to get a good jumping-off point into the code so I can explore the code base in a much more structured way.
To aid again in visualization, once you have a model for your software, you can create new things like a JavaScript D3 visualization of the static elements, like a tree structure. Here is a sample application model from the Spring Team, called Spring Pet Clinic.
This shows the software system's containers and components. You can find all the interesting component dependencies, both incoming and outgoing. You could rate your components based on size and complexity. To reiterate, once you have a model you can do a lot of different things with it.
For example, you can place your model into Neo4j and query it with Cypher. The software architecture model is just a directed graph. There is a whole other tool called jQAssistant that takes your source code, allows you to set some rules, and puts it into Neo4j behind the scenes. Another toolset, created by Empear, runs over your source code repositories; it does static analysis and superimposes the human aspects over it. For example, it can find items that are always changed by two different teams, and ask why that is. We could have the component boundaries wrong in this instance.
Summary (36:42)
There’s a virtual panel about software architecture documentation from 2009. It says that we should be able to see the architecture in the code, we should be able to embed this information into the code, and be able to get the documentation form the click of a button. It is really all about automating as much of the documentation as possible.
As far as visualization goes, we need to remember to think of diagrams as maps of our architecture. Treat your diagrams as a set of maps to your software architecture that describe your code base at different levels of abstraction. Any document you create should describe what your code base doesn't. Diagrams should be more than manually drawn boxes and lines, so we need to stop using tools like Visio to represent our systems.
In closing, whenever you’re describing software, make sure you have a ubiquitous language within your team to do so.
About the content
This talk was delivered live in October 2016 at goto; London. The video was transcribed by Realm and is published here with the permission of the conference organizers. | https://academy.realm.io/posts/gotocph-simon-brown-visualize-document-explore-your-software-architecture/?utm_source=Swift_Developments&utm_medium=email&utm_campaign=Swift_Developments_Issue_75 | CC-MAIN-2022-27 | refinedweb | 5,676 | 53.81 |