| text | url | dump | source | word_count | flesch_reading_ease |
|---|---|---|---|---|---|
Yes, I know what you’ve already been thinking, but this post isn’t about programming languages. It’s about search.
We live today in an ocean of information, far more than we can ever cope with. But in some sense this information is strongly typed: things belong to categories. And this “strong typing” enables us to approach search in a more effective way. That’s why, when I saw a search engine for the first time (Yahoo, if I remember correctly), I was troubled. That was not what I wanted to use to search for stuff! Despite this initial reaction, I gradually became more and more accustomed to it. And I probably forgot what I wanted to see from a search engine, because after years of Yahoo-style unstructured search, it’s now harder to think outside the box.
But I still feel that the right way to approach this problem should be radically different. There has to be a type for the items you want to search, or for the attributes that you use as a filter. What this type is, I don’t know. At this point, I just suppose that it needs to be a certain way to categorize objects into classes.
One way to approach strong typing is to define a schema. This is the WinFS approach, where the schema defines the type. This approach might work perfectly for a few dozen information classes. A hand-made schema can then be used to define different “search namespaces” for things like contacts, music files, documents, etc. And this approach might be just what we need to structure all the information on our desktop.
But this “manually-defined schema” approach won’t scale to Internet size (if you ask me, it might not even work at desktop size – just think about versioning). After all, we are dealing with obscene amounts of horribly unstructured, highly dynamic and sometimes wildly inconsistent data. How are we going to define a strongly-typed schema at the Internet level? That’s not going to work.
So, what if this search engine discovered these categories on the fly? And what if this type were actively used during the search? For example, let’s assume that you start the search with the word “table”. The search page will display an initial results page as a courtesy, but more importantly it will ask you: “Table as a piece of furniture, or table as a dataset?”. You click on one of the two hyperlinks (let’s say on “table as a dataset”), and the search engine will give you a refined results page. Next, it will ask you about variations on the idea – is this related to SQL? Or to Excel? At each step, the search engine gives you various semantic categories of whatever you are searching for.
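The refinement loop described above can be sketched in a few lines. Everything below (the toy index, the URLs, the category labels, the search function) is hypothetical and invented purely to illustrate the idea; a real engine would derive categories from crawl data rather than hard-code them:

```python
# Hypothetical sketch of the interactive, type-driven refinement loop.
# The toy index and its category labels are invented for illustration.

TOY_INDEX = [
    {"url": "ikea.example/oak-table",   "word": "table", "category": "furniture"},
    {"url": "sqlcourse.example/create", "word": "table", "category": "dataset/SQL"},
    {"url": "excelhelp.example/pivot",  "word": "table", "category": "dataset/Excel"},
    {"url": "diy.example/coffee-table", "word": "table", "category": "furniture"},
]

def search(term, chosen_categories=()):
    """Return the current results plus the category questions to ask next."""
    hits = [doc for doc in TOY_INDEX if doc["word"] == term]
    for cat in chosen_categories:          # each click narrows the results
        hits = [doc for doc in hits if doc["category"].startswith(cat)]
    facets = sorted({doc["category"] for doc in hits})
    return hits, facets

hits, facets = search("table")                                  # all senses shown
hits, facets = search("table", chosen_categories=("dataset",))  # refined results
```

Each user click simply appends a category prefix, so the search narrows monotonically, exactly the behavior the hyperlink-based dialogue above implies.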
Note that the fact that a table can be either a dataset or a piece of furniture would be discovered automatically at the time the search indexes are built. At that time, when our search engine crawls the Internet, it sees that 20% of the sites refer to the word “table” in a furniture-related context, and 10% in the context of SQL.
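Under the stated assumption that each crawled page can already be tagged with a context label, the index-time statistics might be computed along these lines (the page list and labels here are invented examples):

```python
from collections import Counter

# Hypothetical index-time pass: estimate how often a word appears in each
# semantic context. Pages and context labels below are invented examples.

CRAWLED_PAGES = [
    ("table", "furniture"), ("table", "furniture"), ("table", "furniture"),
    ("table", "SQL"),
    ("table", "Excel"),
]

def sense_distribution(pages, word):
    """Fraction of crawled pages using `word` in each semantic context."""
    contexts = Counter(ctx for w, ctx in pages if w == word)
    total = sum(contexts.values())
    return {ctx: count / total for ctx, count in contexts.items()}

dist = sense_distribution(CRAWLED_PAGES, "table")
# dist["furniture"] == 0.6, dist["SQL"] == 0.2, dist["Excel"] == 0.2
```

The hard part, of course, is producing those context labels in the first place; that is the word-sense discovery problem this post hand-waves over.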
The discussion really becomes interesting when you start thinking about how you could actually implement this stuff. But that’s another story.
I hope that the current search industry is still in its early stages. And I won’t be surprised if, ten years from now, everybody sees the evolution from weakly-typed to strongly-typed search as a natural one, in exactly the same way we see today the benefits of strong typing in modern programming languages…
Please, I’m asking about MOM 2005 vs. NetIQ AppManager.
|
https://blogs.msdn.microsoft.com/adioltean/2005/01/17/weak-vs-strong-typing/
|
CC-MAIN-2017-13
|
refinedweb
| 631
| 72.16
|
{-# LANGUAGE Trustworthy #-}
{-# LANGUAGE FlexibleInstances #-}

{- | Routing combinators for Hails web applications. -}
module Hails.Web.Router
  ( -- * Example
    -- $Example
    Routeable(..)
  , mkRouter
    -- * Route Monad
  , Route, RouteM(..)
    -- * Common Routes
  , routeAll, routeHost, routeTop, routeMethod
  , routePattern, routeName, routeVar
  ) where

import Prelude hiding (pi)
import LIO
import LIO.DCLabel
import qualified Data.ByteString as S
import qualified Data.ByteString.Char8 as S8
import Data.Monoid
import Data.Text (Text)
import qualified Data.Text as T
import Network.HTTP.Types
import Hails.HttpServer
import Hails.Web.Responses

-- | A route handler is a function from the path info, request
-- configuration, and labeled request to a response.
type RouteHandler = [Text]             -- ^ Path info
                 -> RequestConfig      -- ^ Request configuration
                 -> DCLabeled Request  -- ^ Labeled request
                 -> DC (Maybe Response)

{- | 'Routeable' types can be converted into a route function using
'runRoute'. If the route is matched it returns a 'Response', otherwise
'Nothing'. In general, 'Routeable's are data-dependent (on the
'Request'), but don't have to be. For example, 'Application' is an
instance of 'Routeable' that always returns a 'Response':

@
  instance Routeable Application where
    runRoute app req = app req >>= return . Just
@
-}
class Routeable r where
  -- | Run a route
  runRoute :: r -> RouteHandler

-- | Converts any 'Routeable' into an 'Application' that can be passed
-- directly to a WAI server.
mkRouter :: Routeable r => r -> Application
mkRouter route conf lreq = do
  req <- liftLIO $ unlabel lreq
  let pi = pathInfo req
  mapp <- runRoute route pi conf lreq
  case mapp of
    Just resp -> return resp
    Nothing -> return notFound

instance Routeable Application where
  runRoute app _ conf req = fmap Just $ app conf req

instance Routeable Response where
  runRoute resp _ _ _ = return . Just $ resp

{- | The 'RouteM' type is a basic instance of 'Routeable' that simply
holds the routing function and an arbitrary additional data parameter.
In most cases this parameter is simply '()', hence we have a synonym
for @'RouteM' '()'@ called 'Route'. For example, a route that
alternates between two sub-routes:

@
  routeEveryOther :: (Routeable r1, Routeable r2)
                  => MVar Int -> r1 -> r2 -> Route
  routeEveryOther counter r1 r2 = Route func ()
    where func req = do
            i <- liftIO . modifyMVar counter $ \i ->
                   let i' = i + 1 in return (i', i')
            if i `mod` 2 == 0
              then runRoute r1 req
              else runRoute r2 req
@
-}
data RouteM a = Route RouteHandler a

-- | Synonym for 'RouteM', the common case where the data parameter is '()'.
type Route = RouteM ()

-- | Create a route given the route handler.
mroute :: RouteHandler -> Route
mroute handler = Route handler ()

instance Monad RouteM where
  return a = Route (const . const . const $ return Nothing) a
  (Route rtA valA) >>= fn =
    let (Route rtB valB) = fn valA
    in Route (\pi conf req -> do
                resA <- rtA pi conf req
                case resA of
                  Nothing -> rtB pi conf req
                  Just _ -> return resA) valB

instance Monoid Route where
  mempty = mroute $ const . const . const $ return Nothing
  mappend (Route a _) (Route b _) = mroute $ \pi conf req -> do
    c <- a pi conf req
    case c of
      Nothing -> b pi conf req
      Just _ -> return c

instance Routeable (RouteM a) where
  runRoute (Route rtr _) pi conf req = rtr pi conf req

-- | A route that always matches (useful for converting a 'Routeable'
-- into a 'Route').
routeAll :: Routeable r => r -> Route
routeAll = mroute . runRoute

-- | Matches on the hostname from the 'Request'. The route only
-- succeeds on exact matches.
routeHost :: Routeable r => S.ByteString -> r -> Route
routeHost host route = mroute $ \pi conf lreq -> do
  req <- unlabel lreq
  if host == serverName req
    then runRoute route pi conf lreq
    else return Nothing

-- | Matches if the path is empty. Note that this route checks that
-- 'pathInfo' is empty, so it works as expected when nested under
-- namespaces or other routes that pop the 'pathInfo' list.
routeTop :: Routeable r => r -> Route
routeTop route = mroute $ \pi conf lreq ->
  if null pi || (T.null . head $ pi)
    then runRoute route pi conf lreq
    else return Nothing

-- | Matches on the HTTP request method (e.g. 'GET', 'POST', 'PUT').
routeMethod :: Routeable r => StdMethod -> r -> Route
routeMethod method route = mroute $ \pi conf lreq -> do
  req <- unlabel lreq
  if renderStdMethod method == requestMethod req
    then runRoute route pi conf lreq
    else return Nothing

-- | Routes the given URL pattern. Patterns can include directories as
-- well as variable patterns (prefixed with @:@) to be added to
-- 'queryString' (see 'routeVar'):
--
--   * \/posts\/:id
--
--   * \/posts\/:id\/new
--
--   * \/:date\/posts\/:category\/new
--
routePattern :: Routeable r => S.ByteString -> r -> Route
routePattern pattern route =
  let patternParts = map T.unpack $ decodePathSegments pattern
  in foldr mkRoute (routeTop route) patternParts
  where mkRoute (':':varName) = routeVar (S8.pack varName)
        mkRoute varName = routeName (S8.pack varName)

-- | Matches if the first directory in the path matches the given
-- 'ByteString'.
routeName :: Routeable r => S.ByteString -> r -> Route
routeName name route = mroute $ \pi conf lreq ->
  if (not . null $ pi) && S8.unpack name == (T.unpack . head $ pi)
    then runRoute route (tail pi) conf lreq
    else return Nothing

-- | Always matches if there is at least one directory in 'pathInfo',
-- and adds a parameter to 'queryString' where the key is the supplied
-- variable name and the value is the directory consumed from the path.
routeVar :: Routeable r => S.ByteString -> r -> Route
routeVar varName route = mroute $ \pi conf lreq ->
  if null pi
    then return Nothing
    else do
      lreqNext <- liftLIO $ lFmap lreq $ \req ->
        let varVal = S8.pack . T.unpack . head $ pi
        in req { queryString = (varName, Just varVal) : queryString req }
      runRoute route (tail pi) conf lreqNext

{- $Example #example#

The most basic 'Routeable' types are 'Application' and 'Response'.
Reaching either of these types marks a termination in the routing
lookup. This module exposes a monadic type 'Route' which makes it easy
to create routing logic in a DSL-like fashion.
-}
|
http://hackage.haskell.org/package/hails-0.9.2.0/docs/src/Hails-Web-Router.html
|
CC-MAIN-2015-11
|
refinedweb
| 880
| 62.88
|
Revision history for Treex-PML

2.11  Sun Mar 3 22:09:16 CET 2013
      bugfixes: do not resolve symlinks
2.09  Tue Oct 25 11:02:29 CEST 2011
      bugfix: quote ']]>'
2.06  Tue Mar 8 15:50:38 2011
      bugfix: correctly test $@ when trying backends
2.03  Fri Apr 23 08:56:17 2010
      Fixed a bug in relative URL resolving revealed on MSWin32 platform
      Minor fixes: build, compatibility, documentation
      Nest Fs2csts and Csts2fs into Treex::PML namespace properly
2.01  Mon Apr 19 16:15:35 2010
      First version released on an unsuspecting world (ported from a
      library originally called Fslib and used internally in the tree
      editor TrEd).
|
https://metacpan.org/changes/distribution/Treex-PML
|
CC-MAIN-2015-40
|
refinedweb
| 112
| 55.44
|
Spawner gets blocked when trying to use libhector_gazebo_ros_sonar
Hi guys! To get to the point, I have an .xacro file where I am creating a simple 3-wheeled robot. I am trying to include a camera sensor that works fine and 3 sonar sensors using libhector_gazebo_ros_sonar. When I include the sonars the launching process stops at:
[INFO] [1495446798.750176, 0.000000]: Calling service /gazebo/spawn_urdf_model
[ INFO] [1495446798.857894558, 0.022000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
libGL error: failed to create drawable
If I don't kill the process, it fills up my RAM and everything freezes.
After I kill the process with CTRL+C, I see this warning:
[WARN] [1495446813.634902, 0.032000]: Controller Spawner couldn't find the expected controller_manager ROS interface.
Code for sonar:
<xacro:macro name="sonar_sensor" params="sid rot dx dy dz plink jr jh ray_count field_of_view min_range max_range">
  <link name="sonar${sid}">
    <visual>
      <origin rpy="0 0 ${rot}" xyz="0 0 0"/>
      <geometry>
        <box size="${dx} ${dy} ${dz}"/>
      </geometry>
      <material name="red"/>
    </visual>
  </link>
  <joint name="sonar${sid}_joint" type="fixed">
    <parent link="${plink}"/>
    <child link="sonar${sid}"/>
    <origin rpy="0 0 0" xyz="${cos(rot)*jr} ${sin(rot)*jr} ${jh}"/>
  </joint>
  <gazebo reference="sonar${sid}">
    <material>Gazebo/Red</material>
    <sensor name="sonar_sensor${sid}" type="ray">
      <always_on>true</always_on>
      <update_rate>30.0</update_rate>
      <ray>
        <scan>
          <horizontal>
            <samples>${ray_count}</samples>
            <resolution>1</resolution>
            <min_angle>-${field_of_view/2}</min_angle>
            <max_angle>${field_of_view/2}</max_angle>
          </horizontal>
          <vertical>
            <samples>${ray_count}</samples>
            <resolution>1</resolution>
            <min_angle>-${field_of_view/2}</min_angle>
            <max_angle>${field_of_view/2}</max_angle>
          </vertical>
        </scan>
        <range>
          <min>${min_range}</min>
          <max>${max_range}</max>
          <resolution>0.01</resolution>
        </range>
      </ray>
      <plugin name="sonar${sid}_controller" filename="libhector_gazebo_ros_sonar.so">
        <gaussianNoise>0.005</gaussianNoise>
        <topicName>sonar</topicName>
        <frameId>sonar_link</frameId>
      </plugin>
    </sensor>
  </gazebo>
</xacro:macro>
I include the sonar like this:
<xacro:sonar_sensor
No parsing errors occur when converting the .xacro file directly to .urdf or .sdf. There may be a problem with the way I try to use the library or the namespaces, but I am not configuring those, just using the root namespace / like so:
<gazebo> <plugin name="gazebo_ros_control" filename="libgazebo_ros_control.so"> <robotNamespace>/</robotNamespace> </plugin> </gazebo>
I include the spawner in the .launch file like so:
<node name="robot_controller_spawner" pkg="controller_manager" type="spawner" args="[joint controllers here]"/>
System: Ubuntu 16.04, ROS Kinetic, Gazebo 7. Libraries: libhector_gazebo_ros_sonar and ros-kinetic-gazebo-ros-control, installed with sudo apt-get. No path problem for ROS. The Gazebo path returns nothing, but I set that up and it still does not work. Again, everything spawns without the sonars.
Not sure if this is the same problem.
Thank you in advance for any suggestions. I asked also on the Gazebo forum here.
UPDATE 1: Managed to narrow it down to the ray tag. It spawns when I comment out the vertical characteristics, but I still get the warning:
[WARN] [1495446813.634902, 0.032000]: Controller Spawner couldn't find the expected controller_manager ROS interface ...
this is at least suspicious. Are you running this in a VM, or a bare-metal installation? Do you have the appropriate video card drivers installed for your hardware? The OSS ones typically don't cut it.
For this part you are right. The drivers are just the standard ones; I run it on a laptop. But for now this is not such a big problem. The error appears but the rest works. For example, it has been a while since I last got the RAM-filling problem, but I don't know exactly what the fix was.
It is possible that Gazebo uses OpenGL to accelerate simulation of certain sensors. If the sensor you want to use is one of those, it could be that not having proper drivers installed could have an effect on Gazebo.
Hello SorinV, I had the same problem and fixed it with your suggestions! But it's not able to recognize obstacles. Did you have the same problem? Thank you
Hello moshimojo. I ended up using this plugin: libgazebo_ros_range.so
ok, i'll try it thank you!
|
https://answers.ros.org/question/262258/spawner-gets-blocked-when-trying-to-use-libhector_gazebo_ros_sonar/
|
CC-MAIN-2019-51
|
refinedweb
| 660
| 58.79
|
Search - "shit"
- Why the Fuck would someone disable pasting on a password field!!!! How the fuck am I supposed to enter my shit from my password manager now
- >Firefox randomly freezes
>Opens taskmanager
>spots Cortana running but "suspended"
>kills Cortana
>Firefox unfreezes
DAFAK MICROSOFT, GO FUCK OFF WITH YOUR FUCKING USELESS PIECE OF SHIT CORTANA22
- Start working on ticket
Looks at code
WTF is the shit?
Open devRant to rant
1.5 hours later
what was I doing?3
- Fucking Jira!
You fucking piece of fucking shit. You're about as useful as a nacho cheese enema.
Fuck Atlassian, fuck Jira, fuck fuck fuck fuck!15
- Random professional behavior reminder:
The fact that you're obviously from a minority doesn't automatically make you non-racist. Especially, it doesn't give you immunity for being racist towards other minorities.
Just saying! Be racist outside of office hours, if you can. Or just... Keep that shit to yourself. 💩41
- wE oNlY uSe 10% oF bRaIn pOwEr
yeah, no shit, now go to your computer and launch 400 chrome tabs. Congrats, now you're using 100% of your computer performance. Does it work well now? Quick and snappy innit?
fucking muppets
- If I hear the word "techie" one more time from management, I'm gone lose my shit!!!
I don't go around calling them "managies" all day!!!15
- Got a complaint from users of my project. It’s finally happened: people give a shit to tell me I’m doing a bad job. Let’s fucking gooooo3
- "Update on the last meeting, we'll either release what we've got or do something different"
WELL NO SHIT! 🤦♂️5
- Nothing is working today. 😭😭😭😭
My perfect results of Friday have all disappeared.
It now seems like everything was running on magic on Friday. 😭😭6
- Errrr.... Guys, how do we do weekends again?
I woke up worrying about a bijillion shit, exactly at 6 AM. I think I might be forgotten how to do weekends... 😕16
- Today I found out what happens when a user logs in in this piece of crap: 59 calls to the API just to get user permissions 😤
I'm done...8
- Client company will only move to the cloud once they learn their competitor has, in fact, moved to the cloud. Way to compete guys.
You ain't shit.2
- Performance Review: You’ve slowed down.
My head: I never get to code cause all we do is meetings. Like no shit I have slowed down. Why the fuck do we have 3 days just full of meetings that I don’t even contribute to ?! I’m not gonna do overtime for this shit
Me (actually): Ah, shucks! Guess we can work on that huh?35
- So yeah, I’m at the “Fuck this shit let’s go on a random ass solo motorcycle trip this weekend” stage of my programming career.5
- We do infra as a code, and one of my coworker worked on the project alone. Few months down the road, when shit hits the fan, he just message me this is not working.
First of all, I did not write that shit, and also I was never part of the conversation during the decision making. So when shit hits the fan what do you expect me to do? Do some black magic and fix it magically???1
- Nothing pisses me off more than a manager that keeps demanding shit but when you ask for support they deny it from you. well guess what? two can play this game, bitch.3
- Windows 10 is a fucking piece of shit. It's the worst software ever made. It's the worst program ever written in the entire computer history.28
- Shit went very well - I am really happy for my future
Hopefully, the bureaucracy will be faster for me :P
Details? Not yet5
- WTF is the point of auto-generated documentation. Some dude literally thought it was a good idea to read the code and write the exact same shit differently. WTF IS THE POINT!?
Documentation takes work, sorry, stop being lazy.12
- Normal people dream about shit before they go to sleep.
I, on the other hand, try to formulate a hypothesis that I'm already stuck on for 3 business days, thereby giving myself anxiety, henceforth failure to sleep.
I'm just perfect! 👌🤦🏻♀️🤦🏻♀️29
- I ordered a reduction sieve for my espresso cooker from Amazon, but got this instead (Euro coin for size comparison). They'll resend the correct article, but WTF is this shit even?! Crazy crap!23
- Urgh, fucking excel!
Why the fuck can't you handle a few thousand calculations you dumb ass piece of shit.
I am this close to... fuck, it crashed. 🤦♂️
I fucking give up.
Time to strap this data to a DB instead.8
- !Dev
So today I got a gig for plumbing. So I did it anyway. Here are the picture of what I did. Now they can flush their shit down to the sewer. (Where most of my former bosses are)6
- I am conducting technical interviews for about 10 years now.
I swear to god, the applicants keep getting dumber and dumber.
Getting more and more ashamed to talk about data structures, design patterns or even the most basic algorithms, everyone with a graduation badge from udemy is now a software engineer. Fuck this shit.17
- Here's what being in a rut is like:
You wake up to the alarm, you waste an hour or two in bed stalling browsing social shit. Finally got out of bed. You have a todo list. You ignore it. Get something to eat. Open Netflix or some brain numbing shit while having breakfast. A few hours go by, you're still watching Netflix and switching to browsing social shit in-between so your brain is numb as much as possible. It's lunch time, you're supposed to cook something, nah, I will order something. Oh, it's bedtime, let's make a todo list and go to bed and start over tomorrow...5
- Ok so our director decided to try out google work space
Plugs in our organizations domain and emails etc
trial then expires
we now cant access our emails
cant login
cant do shit5
- RETARD MASTER: So how did you feel about this sprint DEV?
*nothing is planned, new tickets added each day and old ones removed - inconsistent sprint*
DEV: Well, it’s a bit chaotic, but it’s understandable. I’m used to it. Nothing’s to blame here. Client can’t produce their end of the bargain on time.
*3 week later*
DEV MANAGER: So RETARD MASTER gave a feedback. He told me you insulted him.
DEV: Can I please die now? Not funny.12
- How the fuck am I going to make a fucking email signature appear the same everywhere when the client insists in using a piece of shit software called Outlook and I am a goddam backend developer.
I don't give a shit about spacing and color and stupid fucking fonts.
Thank for listening. Have a great day.15
- Have any JabbaScripters ever heard of backwards compatibility?
Nope. Because all the shit on NPM is written by 15-year olds who don't know how to code properly, not to say maintain their packages.
Fuck you.6
- Got a toxic mf transferred off the team. For months we knew nothing of what he was doing and every planning session he just kept saying that it's a waste of time to do this or that and instead just go past everyone to the source to get shit done, and completely ignored the development process we had to at least keep some sense of what people are doing. Also, spent months whining about stuff, instead of just accepting shit and moving on. If you don't want to get with the program, then go fucking do something else
- Incompetence of people around me drives me mad. I see a piece of shit code and I can’t stop myself from improving it.
Also better developers around me. I need to find out how they’re better and beat them
- Fuck, accidentally reported a rant because my hand was resting on the fucking report button.
Shit, and sorry to the comment's author!6
- Super trivial but who ships a laptop to a new employee with random software on that is clearly for their own preferences? I don't use classic shell, I don't like classic shell, and it hugely fucked with both my opinion of the new place (an IT company, ffs) and my estimation of the person who configured it. Do whatever shit you must on your own machine but get out of my way and let me use the fucking os without more pointless shit! I wouldn't do this to you, no matter how much I might love some obscure additional layer for primarily nostalgic reasons. Raging!7
- BGP went down during new year and I had to endure the noise while reviving a dead switch from chassis.
Because a fucking intern decided it was fine to try broadcast internal shit.2
- Yours truly is gonna get published again (probably, most likely) but has to get their shit together and write faster because deadlines.
Send good vibes plz. I've been lazy lately.4
- I think I used to rage more at complete idiots when I was younger because I had the time and energy to do so
Now I've seen it at least a million times and just don't give a shit anymore
- They're not selling water in bullet trains because of COVID. I'm very proud to save people's life by dehydrating myself.
Fuck that shit.19
- I don't wanna hear anyone dismissing college education, especially from people who can't do asymptotic analysis and have no clue what a pointer is. It's not fine. What do you think people spend 4+ years studying for? For this shit? There's a reason why a diploma has weight; it's not just decoration.
I get it that the american educational system is fucked up and you guys have to pay a shit ton of money for it, but you can't just pretend it's worth nothing.
How diminishing it is to hear people shit on a life long struggle to get where i am today. I had to study a ton to get into college, and I'm still pouring my blood and mental health into my studies, only for some random to say that a youtube tutorial is worth the same.17
- v2.0.22 is now officially LIVE in GMT+2. Congrats and good job everybody!
Let's not fuck this one up, shall we? Let's keep our shit together this time.6
- Article 13 and 11 passed today. Fucking why! They ether have absolutely no brain or are sleeping on a shitload of money right now. No one wanted this. First net neutrality now fucking this. Fuck this shit.
- I came to know and use C++ for 10 years now and I've just seen this syntax:
for(int i{0}; i<5; i++) { }
WTF is this shit??10
- Anti malware Service Executable
Can the cunt who programmed this please witness the random carnage their piece of shit causes on my work desktop ?
Granted it’s windows but seriously…4
- Holy Fuck Shit!!!!!!
Being a developer
Being a technical assistant
Soon to be a father;
Shit
Not really prepared for this5
- Added a new feature. Saw no traffic. Removed it thinking no one liked it. Receive a bunch of angry emails.
I forgot to tag shit properly. FML.
- Microsoft is asshoe.
They tried to force update a Windows 10 Home machine to Windows 11. Nobody clicked okay on anything. It had icon in task bar indicating it would install it on reboot. I had to go into the update settings to click "no for now". Fuck you microsoft. Eat shit and die. Just leave my shit alone.11
- The good: use the hardware watchdog in your application control flow.
The bad: don't use a watchdog.
The ugly: trigger the watchdog from a fucking timer interrupt.4
- What I learned from devrant:
There is someone, somewhere, that will upvote the stupidest shit.
Which I find both terrifying and humorous. This is not a criticism nor a putdown. I find people fascinating. I also realize that my definition of "stupidest shit" is very subjective. This is definitely the most "fun" forum I have been on in a long time.4
- I can’t remember shit
My code editor helps me a ton!!
I have most documentation offline.
Ask me to do shit in a job interview without Google or any reference material then the joke is on 🤡2
- Half my Instagram likes are just me scrolling down and randomly stopping so it accidentally counts as a like.
Good luck trying to find what I've liked tho! Instagram protects that shit more than the nuclear codes.
Who the fuck developed this shit?36
- nuget is a steaming pile of human shit, fuck the entire microsoft ecosystem and fuck anybody who likes it, and fuck you, and fuck me8
- WHY DOES GOOGLE CHROME CACHE THIS SHIT AND WON'T LOAD IT AGAIN. I THOUGHT I DIDN'T FIX THE BUG BUT GOOGLE CHROME IS THE BUG. THIS FLYING FUCK9
- Don't you love when clients who don't know shit about your system tell you that they have you a solution to your bug.2
- You create a ui library, and in the docs only thing I see is words, how tf am I suppose to know what this shit looks like3
- Why !?! Why would you design your web page in such a way that as images load, I have to play a scrolling game to try and read the shit !?!13
- I'm seriously thinking of quitting my job because they use microsoft shit.
You would think it would run on Linux, because you know, it's 2022.
Think again36
- Going through some terrible shit in life atm.. Didn't open devRant for a while and sure as the sun the stream made me smile.. Thanks..3
- Macbook keyboard is shit
Especially the European variant.
I want to see any person in the world who is braindead enough to think that making a short left shift in favour of an absolutely useless button there is good.
This is the most stupid decision that could ever be made.
Not only are the symbols on that button seldom used, they are also duplicated on other keyboard keys. But shift is used all the time and must be big enough, instead of that shit
- Is it me or did older games have less bugs, more people who knew how code worked, more rigorous shit ig26
- I know you, youre out there somewhere, coding, feeling like shit, putting your best, listening to coldplay, in the server room, your basement ... I know you veryy well1
- SINGLE
RESPONSIBILITY
FUNCTIONS
HOLY GARBAGE UNTESTABLE DIFFICULT TO DEBUG JFC SHIT.
FUNCTIONS DO ONE THING AND DO ONE THING WELL. LEARN IT, LIVE IT, LOVE IT, OR JUMP INTO A LAKE
that is all.5
- After 10 fucking wasted hours Im still up trying to figure out how to configure the motherfucking IDE to debug the fucking hideous PHP shit fuck code. Fuck PHP right in the ass.16
- When you're reading a NLP paper that cites Aristotle, you know some serious shit is gonna go down.9
- How's Irene and other friends from Ukraine holding up?
Shit is about to get real it seems and it's scary when big nations enter the playing field
- Why the fuck they keep insisting a backend developer to do frontend shit?
No fucking shit I'm slow on this crap. And i told them several times it's not my forte.And they keep inisting to keeping a high standard blah blah blah
Ffs just hire a frontend already.
I'll find another job. I don't care at this point.6
- Why the fuck is the master almost constantly broken? And not even "some feature I'm working on doesn't work"-broken but "can't build this shit"-broken. What the fuck is the workflow here that it's apparently acceptable? I wasn't able to do SHIT today because of it. Almost whole fucking day wasted.3
- I hate demo day. Why can't anyone else demo this shit? Just because I'm the front-end dev on this team doesn't mean I'm the only one who can go through stories and click buttons on my screen damn it!5
- Confucius says: not everything that can be done tomorrow, should be done today.
Let's just say it was an extension of the shit show that occurred this week and sweep it under the rug 🤐
Happy fucking Friday!!!7
- First time using a computer:
Booting up some Mickey Mouse game from a floppy disk when I was 4 on my dad's Gateway 2000.
First time coding:
Writing html in dream weaver at 14.
Edit: holy shit dreamweaver still exists?4
- Still having problems with samba doing weird shit. Now looking through the logs and
Hm... *that* doesn't seem right 🤔
- Of course, the variable for fields should be called "flds", those 2 bytes saved will help us so much! For a small price of this shit being not really readable anymore. Is it "floods"? "fleeds"?6
- Millions of shattered dreams. Possibly the worst emotional pain.
Haters might love this state of mine.4
- [NN]
Day3: the accuracy has gone to shit and continues to stay that way, despite me cleaning that damn data up.
Urghhhhhhhh
*bangs head against the wall, repeatedly*10
- I have a deep love hate relationship with TypeScript. What a marvelous piece of shit that compiles into an app with less bugs.4
- I would like to murder postgres and the awful requirements of this damn project... Plus, I practically didn't sleep more than a blink last night so either postgres fucks off and gets its shit together with its transaction handling shit, or imma about to stab a bitch! 🗡 ⚔️20
- Project idea: make a fucking neural network visualizer, that gets my fucking model and gives me a proper fancy fucking visualisation in jpeg. 😐
I'm angry cuz I have to make that shit manually rn, and shit ain't playing nice.6
- Need somebody to yell at me every evening so I get my shit together and draw something.
Practice makes perfect; rite?18
- Dev: Woah look at this code! I might be a genius!
Also dev a few months later: Woah WTH is this shit? Was I totally dumb or what
- Can people stop using Kubernetes and over engineering shit for services which get like, 10 users at most ? Thank you.
- "Pay more attention to the house"
Oh, really?
I'm working here!
Why every non tech person acts like I'm doing no fucking shit all day?
These types of things makes me want open my own fucking office.5
- If you asked me two months ago I'd have said building and using a Barnes Hut tree with CUDA.
Today my answer is working on a fuzzer with LLVM without knowing shit about either C++ and compilers.
- i dont know sql, gotta look shit up and dont much of it really internalized
i may be now being assigned somebody else's task to do some sql shit
fucking kill me
- The workday today was shit but my colleague just randomly dropped off some ice cream (:
God bless you Martin
- FUCKING PIECE OF SHIT DOCKER LOSING ALL MY FUCKING DATA WHEN I JUST WANT TO RESET THE ROOT PASSWORD YOU PIECE OF SHIT CUNT!AAAAAAAAAAAARGHJ WHAT A SHITTY OBSCURE CONTAINERIZED PILE OF BEARSHIT15
- As we currently see a lot of codeless software platform, in the next 10 years there is gonna be high demand for people to extend these shitty apps into something proper, just like it happened with wordpress. There is a key missunderstanding that writing code and developing an application are the same thing; they are not.
Once you can write code you sure ass aren't a developer, thats a grueling journey until then, and being able to create an application without code exaggerates the problem even further.2
- Fucking garbage piece of shit microsoft httpclient
identical request works in node!
identical request works in postman!
but noooooooo httpclient, you have to add the content length on the content itself, can't add authorization header except through special way, serialization is wrong bunch of shit pile of shit no working shit5
- If you're an ml engineer, you must know how to hyperopt. I could recommend keras tuner tho, it's nice and saves shit on the go.
- Can someone help me how to focus for 6 straight hours/day until the end of this month? Got 1 last exam left till i graduate with comp. science degree. I have to study databases but only theory. And i fucking hate reading text. I hate theory. I like solving problems analytically and theory is my weakness.
I read theory shit for a few mins and then distract myself with mobile games and tiktok for a few hours... I cant concentrate studying this shit...
How do i forcefully focus.
Can someone suggest me the best app that actually works to help me focus or something? Or some yt sound waves music?15
- I sometimes forget windows is absolute shit. Then I get to work with one and remember. Specially since microshit has actually banned my email (because I didn't give it my mew phone number it's sulking like a creepy stalker) and so I can't even properly log the fuck in into a machine I was using a few years back. 😐
If someone makes a windows rip off that could properly deal with .exe files, count me a customer. (in future tho. I haven't got money for shit rn)8
- Does anyone genuinely have manager(s) who give a shit about your career and growth and not simply the status of your JIRA ? :)
Would love to hear your thoughts :)
- yet another big company who fuks everything up by locking accts without a fone they recognize as a fukin fone
wE CaNnOt uSe tHaT PhOnE NuMbEr
go fuk yourself
quit ruining systems by DEMANDING they are linked to something and then fuking everything not in some finite condition. "oh its for spam" - bs. "oh well there was unusual activity" - dont care you just lock shit out and that means support too you assholes. "wE CaNnOt uSe tHaT PhOnE NuMbErjUsT GeT A FoNe" - go fuk yourself you idiotic piece of shit.
sry, it's 2022, voip is a thing. go to hell
- we're doing a massive database migration and trying to fix a lot of shit that was done for years in the db. problem is, i have to cater to a business major asshat that doesn't know the first thing about working with data and he's responsible for 99% of the shit we're trying to clean up.
his response to the problem i brought up in his stuff? "we can deal with it later, right now i need it like this". this is why you guys have a shit database, because we have to spoil this idiot. then they complain everything takes forever to run and the database is bloated and somehow it is our fault.
I'm really holding myself right now, because i already went off on him once and he basically called me hysterical, and our boss likes him too much to antagonize the bastard. but god i wish i could run over him11
- Multi-rant incoming!
1.
Stereotyping.
When did that shit become the norm?
2.
I'm lost in ROS smach. Does anyone do multi-threading in ROS services, or should I flip a table on smach? 😒
- Nothing gets on my nerve than microsoft. Just another day being a victim of fucking microsoft trash product called teams. All I wanted to do was login but no, this ass of a product has it's own shit things.3
- someone once told me this and every time im in a slump and on the clock it at least makes me smirk
"i like shitting on company time, because they're paying me to shit"2
- Due to non work related shit I'm struggling to focus, I can still wrap my head around programming (even if with significant struggle) but I cannot keep up with cloud/containers/microservices/cool new tech of the day2
- When the project you're supposed to demo on monday is going blank on monday morning.
THIS SHIT WAS WORKING ON FRIDAY FFS
- I know a lot of people disagree with modern art, but fuck me, at least we got away from this ugly shit.7
- Apple’s ecosystem is restricting, Windows 11 is shit and I don’t like Linux much.
Doomed, aren’t I?
- It’s been a loooong hiatus from devRant but I’m back and still no swipe to exit a rant. Cmon, that shit would make this app easier to navigate
- My phone has a useless Google Discover page on home screen that I can't use without singing in with Google and agreeing to them collecting data. 😐
So now I have a useless page on my home screen. (Really, somebody needs to make a layering over app for this shit that just gets my preferred feed and fill it here to makes use of this space. )11
- Don't you love when the designer can do their pretty shit without consulting you and then the client ask you why the shit the designer made looks so different.1
- OK my salary as medical doctor went into shit, Im not joking im in ASEAN 3rd world country
SWITCHING TO DATA ENGINEER
wish me luck
- Screw OpenAI gym!
This piece of shit, frozen lake, doesn't even work properly!
Gaaàaaaæaaáaaãaaaaâaaåaaäaāaaah!
- always put timeouts on your connection code..number of times I see shit blocked forever ..and we know it’s not good to have your shit blocked5
- Started the day having a career crisis where I feel worthless and all I have been doing is some worthless web shit that humanity never needed and most certainly, never will.
Good day!
- CORS is shit
Stupid useless shit that protects from nothing. It is harmful mechanism that does nothing but randomly blocks browser from accessing resources - nothing more.
Main idea of CORS is that if server does not send proper header to OPTIONS request, browser will block other requests to that server.
What does stupid cocksuckers that invented CORS, think their retarded shit can protect from?
- If server is malicious, it will send any header required to let you access it.
- If client has malicious intents - he will never use your shit browser to make requests, he will use curl or any ther tool available. Also if server security bases on something as unreliable as http headers it sends to the client - its a shit server, and CORS will not save it.
Can anyone give REAL examples when CORS can really protect from anything?32
- I just took the fattest shit imaginable. Its so huge and thick my asshole hurts. Almost the girth of my arm as if i ate a deer11
- "Abstract all the things!"
Theres so much fucking abstraction and so many different ways to abstract shit that I don't fucking know how anything works anymore.
Fuck this
- I have stopped Windows updates in my laptop. But no windows has to do some shit in order to maintain its reputation. Every 10 min I get this pop up.
- After a wildly productive day yesterday, I've spent most of today staring at my screen, going down mental rabbit holes. Sod's law!
It's not helped by an accidental all-nighter last night - I'm too old for that shit these days1
- Working at a different company for a few weeks before getting back to my usual work.
I'm using everything I hate: ReactJS, factories, style through JS, Jira, Teams, TDD...
The only good thing is I'm using TypeScript...
- polymorphic relationships are fucking stupid. it's a great way to make your shit more complex and less maintainable for no benefit other than having 1 less table (which also makes queries slower since they can't be optimized correctly).3
- all these fukin job postings...
remote - ya k, great
then the benefits go on to list shit like snacks, cafe or lunch room, on-site gym...
wha?...who? how?...are you ... =|
- fuk c macros :)
why am i doing this? for the potentially easier addition of some shit in the future?
- Worst: Not landing a decent job
Best: dev-wise, none really.
It has been a rather shit year, really.
- Webflow is the worst pieces of shit software I have ever used. Ffs why the fuck people use this crap
- Real conversation with my shit bank
Me: Hey, I want to change the phone number associated with my card because I no longer have access to it. (aka stolen). I can't find the option to do so on your website anymore.
Them: Yeah, for security reasons you now have to come down to the bank (which involves standing in line for anywhere between 40 minutes and 2 hours) to do that simple change.
The actual fuck.
- Fuck it, go ruin your own life, I don't deserve this shit.
I don't deserve getting treated like shit by my mother for financially helping her, in fact, bringing all the fucking money in this household to the table.
I don't deserve being gaslighted by some hypocrite who victimized or egos themself up to fit their narrative.
Just ruin your own day, but keep me out of it. I'm tired of playing mental support just to be shit on.2
- Mad how Zuck can instantly stop everyone talking about all the creepy, weird shit by simply renaming the internet. Prince Andrew missed a trick there.2
- well shit. i get back from vacation to find out that facebook disabled our fb app for some violation i forgot to fix before leaving. fuckfuckfuck.
- I just cannot hold myself laughing at the most serious moments in meetings. People say the funnies/stupidest shit and they go unnoticed!
- "Really... it is 2006 right? Email's been around for ages, so why are email clients still so crappy?"
XPXPXPXP, oh brother. shit just dont change do it?
- Fuck this shit
I’m interning at this place and the code is ALL OVER THE PLACE. I have to rewrite every damn function and the code base is so obfuscated and stupid on multiple levels. I’m sick of this shit and literally every damn thing needs to be rewritten from scratch2
- Posted on Twitter
A thread of Rittenhouse facts
I learned watching trial
So many spread lies
I couldn’t keep my silence
RIP follower count
- Hot take:
Want to bloat the shit out of your product and want to lose the core USP?
Want to stop solving the problem you set out to address?
HIRE A PRODUCT MANAGER.
PMs are expert at shit show. We fuck up everything.
God I hate my profession.
When I start my company, the mandatory rule for every PM will be that if you want to add or enhance a feature, they will have to remove a feature. Whoever fails to do so will be punished by having to clean up the code base and work in sales for a quarter.6
- Why would she contact me for a job that needs ASAP attention and not show up after I gave her my salary expectation, I bet she is expecting a $1-$2 salary expectation from me, crazy recruiters I wonder if there's a university for that shit,
Hi guys, another recruiter shit!!!!
- If I have to write one more uber-complex, goddamn Google Optimize test, I will literally piss and shit and throw up on my computer and then throw it out the window.1
- man. my job situation just keeps on dragging and getting more 'interesting' lets call it.
the nonsense and either complete incompetence or malcontent...just astounding. basically was told to undo/reenable something someone requested even though shit is still broke and they wont do anything about it even though theyre the only one who can.
i really need to get my head around w/e is stopping me from doing shit on my own and making a bill to quit this drudgery
- i have always thought about buying a 1 way ticket somewhere and leaving this shit.
right now seems like the best time to do that and actually benefit on all accounts from doing it7
- *just a normal day*
kiki: *,*::before,*::after { filter: blur(40em) }
OH SHIT!
*bass boosted CPU cooler intensifies*
*flossing dance*
- I love python... but Holy shit the way imports works is something I'll be ranting about for a looonng time
- What's a no CS degree, zero experience, experienced a short boot camp, was hoping this would be his break and spent a shit ton of money on it, person gotta do to get a developer job?12
- FUCK THIS SHIT. A fucking maid is paid like idk, $30/hr nowadays. Why the fuck are we eating so much shit for.
I'm moving to some remote fucking place and I'll grow my food instead of constantly begging for raises because a fucking tomato costs $737284883827362294939
MAY CLIMATE CHANGE AND WW3 KILL Y'ALL MFS
- This is not a 'developer' rant, but goddamn it I work remotely for a few weeks with kids on holiday and sometimes feel like there's no hope for them. I just had one clean up shit from the bathroom door, because apparently they don't know how to wipe their asses properly, and instead of learning from this mistakes they were more concerned about me interrupting their play time to make them clean the shit. Almost seven years old and such a fucking nimwit sometimes. Sometimes I feel like it'd be real helpful to just to spank them..4
- Anyone used "Showwcase"? Branded as a "social network for devs"
Was reading an article, seems a bit shit, but thoughts?
Article:...
Url:
An article describing toxic environment in angular team. Read lots of angular shit issue thread. The problem might be worse.
Angular needs to die
- we need you to fix this bug that cannot be reproduced
ugh so now i have to go figure out how to reproduce it? i dont think this shit is even priority
- Was always wondering how different programmers perceive their IDEs: for front-enders it's simple lightweight cool-looking notepad when majority of back-enders uses heavy tank-looking shit like intellij or eclipse with guns, rifles and much more shit6
- Wondered what that checkmark on a youtube channel means. Googled it. The summarized official answer:
- when a senior likes to kick up a fuss on PR reviews, you'll do whatever dumbass shit they want in most cases, assuming it works
but its a relief when a tech lead or architect tells them to fuck off
- how the fuck can I download fucking retard shit of doctl digital ocean on fking windows , i keep getting this retard shit fucking fuck just keep it simple u fuck shits. why do i need to fking copy lines of fuck to power she ll that fking isn't working fuck off
New-Item -ItemType Directory $env:ProgramFiles\doctl\
Move-Item -Path ~\doctl-1.70.0-windows-amd64\doctl.exe -Destination $env:ProgramFiles\doctl\
[Environment]::SetEnvironmentVariable(
"Path",
[Environment]::GetEnvironmentVariable("Path",
[EnvironmentVariableTarget]::Machine) + ";$env:ProgramFiles\doctl\",
[EnvironmentVariableTarget]::Machine)
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
- Lol, installing an Abp package literally breaks startup, you don't even have to use the package, just install it and the shit breaks.
God I hate this bloated piece of shit framework, can't wait to move this codebase to Go.
- You know that you life has gone to shit when you type "p" into your browser and the first thing that comes up is "postresql"
- Some people try to talk by changing their accent to look cool.
And my mind every time is like - man stop that and try to change your mindset first.
- Quick update on our partner's API that doesn't work (see previous rant).
They gave the wrong URL! Wow!! Well we have the new URL but
the production credentials don't work!!!
- My father lost is password of is google account =_= TFA need phone number ... but the phone is lock ... cannot format the phone because of FRP ... technologie is so shit these time...11
- typescript is shit.
I have never seen such a stupid bug in other languages....
Apparently, there is no way to do type narrowing in a nested object without using enum
- One thing I learned over my few work years is:
1.Never do anything for free.
2.Be irreplaceable however you can.
3.Most managers ain't shit so don't play nice and end up getting stomped.
- Ok so these fucks call me back to talk about why their sites are down
1 went into maintenance mode and the other got fucking hacked and they want to pin that shit on me
Fuck these fuckers
- How do I find all of the AWS resources' arn identifiers? I'm trying write shit in terraform but making granular IAM policies is a nightmare.4
- Fuck flutter fuckery
why tf arabic text keep fucking breaking and going to next line, wtf is this shit. I have spent soo much time on this fuckery.
fuck this fucker
- Holly fucking crap
i feel like my brains will just start oozing out
my fucking system isn't loading data properly in the table from a db despite following tutorials exactly the same
FUCK THIS SHIT!!!!!!!!
- I’m already put on “support” for days if not weeks now and i’m tired of it. Give me a concrete project instead of the usual bucket of shit please!
- People complain and say react is shit, but yet still every modern website I visit is built using react.
- Every time I think it's gonna be different. "This time I'll make a clean repo!". Then the frustration hits and the first "fucking piece of shit" commit creeps in and you push it there. One more repo soiled..5
- is github becoming a giant piece of shit for anyone else? its like gambling every time i make a push. sometimes it finishes, sometimes it decides to hang and do nothing.2
- The bigger imminent threat right now seems to be nuclear war! :/
Can't believe how quickly we have arrived to this shit again....
- Why the fuck do i have to pay Azure 2 grand to have what is basically thr equivalent of a $25 per month VPS? This shit is ridiculous.
- I wonder why the css class inside the css file throws an angular compiler error at '{' ... shit, it was configured with sass, not scss 🤦
- When you're a week behind in school because shit broke in production every night this week 🥲 I wish I could lucid dream so I could have some sweet relief by having superpowers or some shit.
But no, instead I get to be Mr. Fix-it lmao
- Mongodb CEO and the developer who build this shit for brains interface should be tarred and feathered. Almost 90minutes in and I cannot connect to anything other than error codes. What in the actual fuck is your job other than to make it difficult for a "free tier" user to connect?
"connect ECONNREFUSED 127.0.0.1:27017"
Oh ok another 20 minutes of work and you give me a bland beige error code like "```TLS/SSL is disabled. If possible, enable TLS/SSL to avoid security vulnerabilities.```"... um ok how do I enable it for your site, your database or on my computer... oh wait you don't say shit do you?
So now I'm fully 81 minutes into this shit show and all I get for error codes are these really descriptive gems 'getaddrinfo ENOTFOUND cluster0.hudbd.mongodb 'dot' net` comes up if I choose `mongo` with "connection string scheme" above it or `bad auth : Authentication failed'7
- + this post if you believe web developers shouldn’t touch fucking SEO, im sick of this shit still after 12 yrs…wtffff, am i wrong ?????? Please help14
- -.- seriously. found 'neeva' which kind of salvaged boolean searching, but even that's shit now.
this "stuff" -notthat
dId yOu wAnT NoTtHaT ? here it fricking is, first page
- Why the fuck are testers attending the daily, if their only contribution is "Well.. umm.. I like.. umm.. ran tests yesterday"
Well no shit monkey, I could've told you that!
- When people write functions/methods with bodies smaller that the call header (-_- ).
Function calls are not free people! Just Inline that shit manually (or at least make sure the compiler does so)!
double degree_to_radians(double degree){
    return (degree / 180.0) * M_PI;
}
- anything humanity invents to help people, after some time turns out it’s becoming tool to control and kill people so maybe stop inventing that shit to help us so we can live2
- Why building a library of React component should be this hard: I'm sick of Webpack, Babel, Typescript and all the shit which is duct taped together to build some damn widgets.4
- For those who have made a FE to BE transition: what is ur best advice on how to try new things and find a place where you can build cool ass shit?
- dude why there so many dum fuck in this industry like people who just graduated , and don't know shit about tech or anything but flex and complain about shit just can accept that they don't know any shit this many years fucking noobs nothing like why are they , this don't deserve jobs just make bug and always call for help like why cant they figure out their shit, its just cant just spoon feed they every time, like i dont know what happens to this people after or they just survives in it? just tired of getting my ass on whenever they suck dude they dont know how to commit git lol , but never accepts , i am not talking about one person its like a species now , they dont even try to learn even tho they get jobs for no way , itrsucks2
- dude don’t know shit abt me and it’s been 3 months yet i know sm abt him why can’t guys just put in some effort and stop caring abt girls body’s10
- Dumb asshole lays a half smoked joint near me
Got up threw that shit away
Not trying to get arrested
Meanwhile accomplice wanders out again
- Is there an application for tracking workflow steps? for eg “selecting terminal and then recording the commands and the order in which they were inputted” so that I don’t have to memorize all that shit3
- How bash does not support redirecting stderr to /dev/null when using read redirection inside command substitution is F*CKING ANNOYING.
x=$(</foo/non_exitence_file.txt 2>/dev/null)
Why do people still use this shit of a shell
- So this supposed dev ops job has devolved into first support.
This shit honestly doesn't even make sense to me
Time to start looking
- Do you remember when Death Grips were popular, tons of memes and shit, but some time has passed and we all just… moved on?1
- I don't - I live and am ready to face the consequences if I'm ever found out not to be the best of myself everyday.
(That shit is reserved for family and friends - can't buy my smile)
- I have a HUGE diarrhea for several hours now. It wont go away. Every 30 minutes or so i have to take a big dump. And its always such a huge explosion of literal liquid instead of shit. Well its still shit but in a liquid form. Its like im pissing but shit. For the last couple of weeks im not eating right because of huge amount of stress wave. Im eating very lightweight food and in a small quantity while drinking water a lot. Could that be the reason or does it have something to do with covid i had last week? Either way help me get this explosive diarrhea out of me what should i do24
- Me, two days ago: Man, I should try to drink less caffeine, I felt pretty shit when I didn’t had coffee… :(
Me, at 2am: *adds 6 tubs of gfuel to cart*
*proceeds to checkout*
- My boss, Business people call dev code monkey bec they act as money is shown, like monkey likes banana! Dances Which is kinda wired like what they think we are (felt like shit)3
- It just never works when you try to set up self hostet php apps never theyre all shit they never print any useful errors they just randomly pass wrong args they use fucking ioncube fuck everything6
- My friends, I'm enjoying my time off by doing tango and surf and stuff. On new year eve, Imma go do 4 days of tantra shit to become an amazing lover. Life is good. Have a good day.4
- He said there is new bunch of testing frameworks, better then the old ones. Cypress he said. Testcafé he said. Angular is not tied to protractor anymore he said.
Than he was torn to another project and Horus took over his tests. Guess what? The syntax is slightly differenr, but when I experienced you produce the very same problems as with every other framework: Bad selectors. Using sleep instead of expected conditions. Tests Interferon with each other. He is a newbie so ok he dies not know shit. But I have to repair this shit now and learn a new framework for that while very experienced with selenium. But at least we use the newest shit now. Fml.1
- watching this vid from a bit back, and i wana quote this from now on when being snide about ppl's shit code
- Now it's bitbucket and gitlab that are not answering.
I will get fired because I can't do my job because nothing's fucking working -_- When it's not team (M$ piece of shit) memory leaking, it's visual studio. When it's not visual studio, it's windows. Or WSL. Or Atlassian shit.
- Any suggestions for tutorials / tips on doing facial identification? I want to identify a face with a label. Most of the shit I can find is face detection which is not what I’m looking for
- LiNuX derder
i mean, besides the desktop, browser and obligatory shit?
uh, lets go with wine/dxvk (which, you know, is totally NOT an emulator)
- everything is shit and getting worse. everyone is retarded and getting worse. (including me of course)
no reason to be trying.
peace in my soul.
finally.
- Every day has repetition in it of course but why become animatronic dummies ? Literally same shit. Stop doing the same things because I mention that you're all weird mother fuckers that do the same shit ! If that is even the case ! People used to have differing conversations unless they were borinh mudbrick stacking shit covered peasants! At least they built up different behaviors god9
- what the fuck
am i supposed to do
that is not
fucking
going
to lead
to more of the same shit.
i need a fucking clue here.
or rather i don't
this HUMAN does.
- Everyone in the star wars universe must be a shit shot.
the mando armor has more kinks in it than any armor that ever existed.
the hands
the feet
the legs
the neck
lol etc
Source: https://devrant.com/search?term=shit
Haskell Quiz/Internal Rate of Return/Solution Dolio
From HaskellWiki. Latest revision as of 13:41, 9 February 2008.
My solution for this quiz uses the secant method, which is quite easy to implement.
import Data.Function
import Numeric
import System.Environment

secant :: (Double -> Double) -> Double -> Double
secant f delta = fst $ until err update (0,1)
    where update (x,y) = (x - (x - y)*(f x)/(f x - f y), x)
          err (x,y) = abs (x - y) < delta

npv :: Double -> [Double] -> Double
npv i = sum . zipWith (\t c -> c / (1 + i)**t) [0..]

main = do
    (s:t) <- getArgs
    let sig = read s
        cs = map read t
    putStrLn . ($"") . showFFloat (Just sig) $ secant (flip npv cs) (0.1^sig)
The resulting program expects the first argument to be the number of digits to be displayed after the decimal point, while the rest are the yearly income. For instance:
./IRR 4 -100 30 35 40 45
0.1709
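For comparison, here is a rough Python transcription of the same secant-on-NPV idea; the function and variable names below are my own, not part of the wiki solution:

```python
def npv(rate, cashflows):
    # Net present value: discount each yearly cashflow back to year 0.
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def secant(f, delta, x0=0.0, x1=1.0):
    # Secant iteration, mirroring the Haskell until/update loop:
    # stop once two successive estimates agree to within delta.
    while abs(x0 - x1) >= delta:
        x0, x1 = x0 - (x0 - x1) * f(x0) / (f(x0) - f(x1)), x0
    return x0

cashflows = [-100, 30, 35, 40, 45]
irr = secant(lambda r: npv(r, cashflows), 1e-4)
print(round(irr, 4))
```

Running it on the same cashflows as the example above should reproduce the 0.1709 internal rate of return.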
Source: http://www.haskell.org/haskellwiki/index.php?title=Haskell_Quiz/Internal_Rate_of_Return/Solution_Dolio&diff=prev&oldid=18983
02 June 2011 10:11 [Source: ICIS news]
SINGAPORE (ICIS)--
The plant was shut unexpectedly on 18 April and it was initially expected to remain shut for about 45 days.
However, the shutdown was longer than expected as “core facilities, including a compressor and agitator”, were damaged in a fire, the source said.
The company exported surplus stocks of feedstock paraxylene (PX) as a result of the plant’s prolonged shutdown.
A PX cargo for the second half of June was earlier heard sold by Mitsubishi Chemical at $1,500/tonne (€1,050/tonne) CFR (cost & freight) Taiwan/or CMP (China Main Port) to a South Korean trader.
Meanwhile, the demand in
“The [plant's] shutdown has so far had little impact on the local PTA market because of the poor demand during the season,” an Indian polyester maker said.
Major PTA producer Indian Oil said on 26 May it was considering reducing the operating rates at its Panipat-based PTA plant because of poor downstream domestic demand.
($1 = €0.70)
Additional reporting
Source: http://www.icis.com/Articles/2011/06/02/9465419/mitsubishi-chemical-plans-end-july-restart-for-pta-plant-in-haldia.html
TL;DR – Functions are reusable blocks of code that allow you to pass specific input parameters to them. With Python functions, you can execute a set of code without having to re-enter it.
Why use functions in Python?
The main reason to use Python functions is to save time. You might need tens of lines of code to perform one or more tasks on a set of inputs. However, you can simplify the task by defining a function in Python that includes all of them.
Note: next time you need to execute that task, you can use the function and simply define your inputs rather than re-enter long code.
On top of that, functions are easily reusable. Code contained in functions can be transferred from one programmer to another. This eliminates the chance of losing lines of code or incorrectly entering the code underlying the function.
Tip: Python has thousands of built-in functions such as print(), abs(), and range(). Plus, modules and libraries act like packages of additional functions for you to use.
Defining functions in Python
To define a function in Python, use the
def command followed by the name of your new function:
def myFunction():
    print("This is my new function")
To call or execute your function, simply enter its name followed by parentheses:
def myFunction():
    print("This is my new function")

myFunction()
Adding input parameters to functions
If you want to add parameters that can be passed to the function as inputs, you will add them when defining the function.
Name your input variables inside the parentheses, separated by commas, and reference them in the code underlying the function. For example:
def myInfo(name, age):
    print("My name is %s and age %d" % (name, age))

myInfo("Paul", 20)
Note: Python raises an error if you call the function with inputs it does not accept, such as too few or too many arguments.
This Python function example requires you to specify inputs for both the name and age parameters. You can set default values for both parameters when defining the function:
def myInfo(name="Paul", age=20):
    print("My name is %s and age %d" % (name, age))

myInfo()
Now, if you call the myInfo() function without specifying any inputs, Python will use the default values of "Paul" and 20 for the name and age arguments.
You can specify default values for any number of parameters in a Python function. However, once one parameter has a default value, Python's function syntax requires that every subsequent parameter in the definition also has a default value.
You can change the order in which parameters are entered when executing a function if you call them by their parameter names. As an example:
def myInfo(name="Paul", age=20):
    print("My name is %s and age %d" % (name, age))

myInfo(age=23, name="Joe")
Python functions: useful tips
- It is possible to define variable-length functions in which the number of input parameters is undefined. To do so, define your function with an asterisk (*) before the parameter name; any number of values passed when calling it will be collected into a tuple.
- When defining function names and input parameter names, be careful that you do not overwrite built-in Python functions or functions imported from a module.
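As a sketch of the first tip, a hypothetical total() function that accepts any number of inputs through a starred parameter:

```python
# A variable-length function: *values collects any number of
# positional inputs into a tuple that the body can iterate over.
def total(*values):
    return sum(values)

print(total(1, 2, 3))  # 6
print(total())         # 0
```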
|
https://www.bitdegree.org/learn/python-function
|
CC-MAIN-2020-16
|
refinedweb
| 550
| 57.61
|
So I have a table in my Customers.db file that looks like this:
I wanted to make a function which returns the "rows" which have the id equal to the user input. The table is called 'Customers' and all the columns are type TEXT.
import sqlite3

def run_query(db, query, args=None):
con = sqlite3.connect(db)
cur = con.cursor()
if args is None:
cur.execute(query)
else:
cur.execute(query, args)
data = cur.fetchall()
cur.close()
con.close()
return data
def get_info_by_id(db, ide):
query = "SELECT * FROM Customers WHERE id = ?"
return run_query(db, query, ide)
get_info_by_id('Customers.db', '920')
The args argument of cur.execute needs to be an iterable so that its items can be bound to multiple placeholders (?) if necessary. Therefore you should always pass a list or tuple.
get_info_by_id('Customers.db', ['920'])
When you pass a string it gets treated as an iterable of single-character strings, hence it says there were 3 bindings (3 characters in '920').
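A minimal sketch with an in-memory database (table contents made up here) showing both the fix and the failure mode described above:

```python
# sqlite3 treats a bare string as a sequence of one-character
# parameters, which is why '920' reports 3 bindings.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customers (id TEXT, name TEXT)")
con.execute("INSERT INTO Customers VALUES ('920', 'Alice')")

# Correct: wrap the single parameter in a list or tuple.
rows = con.execute("SELECT * FROM Customers WHERE id = ?", ["920"]).fetchall()

# Incorrect: a bare string is iterated character by character,
# so sqlite3 sees 3 bindings for 1 placeholder.
try:
    con.execute("SELECT * FROM Customers WHERE id = ?", "920")
    binding_error = None
except sqlite3.ProgrammingError as e:
    binding_error = e

con.close()
```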
|
https://codedump.io/share/BMSkIwkmtD5d/1/what-am-i-doing-wrong-in-my-query
|
CC-MAIN-2017-51
|
refinedweb
| 161
| 69.79
|
Subject: Re: [Unbound-users] named.cache & .conf setup best practices
Date: Tue, May 28, 2013 at 10:45:01PM +1000
Quoting shmick at riseup.net (shmick at riseup.net):
> will i be able to resolve gTLD's such as .satan (which cesidian can)
> .africa (which namespace can) or .geek (which opennic can) ?
Why would you want to? When nobody else can or even wants to?
--
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE +46 705 989668
This MUST be a good party -- My RIB CAGE is being painfully pressed up against someone's MARTINI!!
|
https://www.unbound.net/pipermail/unbound-users/2013-May/002924.html
|
CC-MAIN-2017-51
|
refinedweb
| 117
| 71.51
|
trying to understand and use an Array
jandy48, Apr 6, 2012 5:43 PM
I would like to have a game where the player has three tries before the game stops or moves on to another level.
In this game an object jumps up with a mouseClick and if it doesn't hit it's target it falls where it crashes into a floor that uses hitTestObject.
This leads to a restartBtn. but I want that movieClip to remain on the stage, which has an animation that splatters. A new MovieClip is put on to the stage and the cycle starts over.
Before this stage of the game s over I want the various movieClip splatters to be visible on the stage.
I thought an Array would help me achieve this result but I'm not familiar with using them dynamically.
I'm hoping someone can give me some tips as to what might work.
I have temporarily separated this problem from the rest of the code as I'm hoping it will be clearer.
This is where I left off and when I click the button it seems to eliminate the previous movieClip and introduce the next one.
But it seems like I'm missing something so I thought I would post it as it is probably a problem that comes up a lot in games. Thanks
import flash.display.MovieClip;
import flash.events.MouseEvent;
var movieArray:Array = new Array();
movieArray = ["Egg_A","Egg_B","Egg_C"];
movieArray[0] = new Egg;
movieArray[1] = new Egg_B;
movieArray[2] = new Egg_C;
var myMovieClip:MovieClip;
init();
function init()
{
for (var i:int = 0; i < movieArray.length; i++)
{
emptyMC.addChild(movieArray[i]);
}
}
mainBtn.addEventListener(MouseEvent.CLICK,changeEgg);
function changeEgg(evt:MouseEvent):void
{
for (var i:int = 0; i < movieArray.length; i++)
{
movieArray.splice(i,1);
}
init();
}
1. Re: trying to understand and use an Array
Ned Murphy, Apr 6, 2012 6:57 PM (in response to jandy48)
I don't see where an array is going to make anything remain. Just having an instance created without removing it until you want it to go away is all you need.
I don't see much reason with what you are doing with that array either. First you assign a set of strings to it, then you replace those strings with instances of some Egg objects. Then you add all the eggs to the display at once in your init() function (not one at a time), or you remove them all from the array with your change Egg function... calling the init() function after emptying the array isn't going to yield much since the init() function uses the array.
2. Re: trying to understand and use an Array
jandy48, Apr 7, 2012 3:49 AM (in response to Ned Murphy)
OK
So I might try placing three or more instances on the stage and changing their visibility as I need to use them.
I'll keep playing with it but at least I'm getting more familiar with Arrays.
Thanks
|
https://forums.adobe.com/thread/986436
|
CC-MAIN-2015-27
|
refinedweb
| 500
| 78.99
|
In this challenge, we are given a server which accepts encrypted commands and returns the resulting output. First we define our oracle go(cmd).
import urllib2

def go(cmd):
    return urllib2.urlopen('' + cmd).read()
This simply returns the output from the server. It is common for this kind of CTF challenge to use some block-cipher variant such as one of the AES modes.
The first guess I had was that AES-CBC was being used.
That would mean that if we try to flip some bit in a block somewhere in the middle of the ciphertext, the first decrypted part would remain intact, whilst the trailing blocks would get scrambled. Assume that we have four ciphertext blocks c1, c2, c3, c4 which decrypt to plaintext blocks p1, p2, p3, p4. Now, we flip a bit in c2 so that we get c2', then we would expect p2 and all later blocks to come out corrupted. (This is not true, thanks to hellman for pointing that out in the comments).
Turns out this is not the case. In fact, the error did only propagate one block and not further, i.e., only p2 and p3 are affected while p4 stays intact. Having a look at the Wikipedia page, I found that this is how AES-CFB/(CBC) would behave (the illustrating mode diagram from Wikipedia is not reproduced here):
Since p_i = E_k(c_{i-1}) XOR c_i in CFB decryption, we can inject some data into the decrypted ciphertext! Assume that we want the i-th plaintext block to become our own message m. Then, we can set c_i' = c_i XOR p_i XOR m, since then p_i' = E_k(c_{i-1}) XOR c_i XOR p_i XOR m = p_i XOR p_i XOR m = m. Embodying the above in Python, we might get something like
def xor(a, b):
    return ''.join(chr(ord(x) ^ ord(y)) for x, y in zip(a, b))

cmd = '8d40ab447609a876f9226ba5983275d1ad1b46575784725dc65216d1739776fdf8ac97a8d0de4b7dd17ee4a33f85e71d5065a02296783e6644d44208237de9175abed53a8d4dc4b5377ffa268ea1e9af5f1eca7bb9bfd93c799184c3e0546b3ad5e900e5045b729de2301d66c3c69327'
response = ' to test multiple-block patterns'  # the block we attack

split_blocks = [cmd[i * 32: i * 32 + 32] for i in range(len(cmd) / 32)]
block = 3  # this is somewhat arbitrary

# get command and pad it with blank space
append_cmd = ' some command'
append_cmd = append_cmd + '\x20' * (16 - len(append_cmd))

new_block = xor(split_blocks[block].decode("hex"), response).encode('hex')
new_block = xor(new_block.decode("hex"), append_cmd).encode('hex')
split_blocks[block] = new_block
cmd = ''.join(split_blocks)

#print cmd
print go(cmd)
We can verify that this works. Running the server, we get
This is a longer string th\x8a\r\xe4\xd9.\n\xde\x86\xb6\xbd*\xde\xf8X\x15I some command e-block patterns\n
OK, so the server accepts it. Nice. Can we exploit this? Obviously — yes. We can guess that the server does something like
echo "{input string}";
First, we break off the echo statement. Then we try to
cat the flag and comment out the rest. We can do this in one block! Here is how:
append_cmd = '\"; cat f* #'
Then, the server code becomes
echo "{partial + garbage}"; cat f* #{more string}";
The server gives the following response:
This is a longer string th:\xd7\xb1\xe8\xc2Q\xd7\xe8*\x02\xe8\xe8\x9c\xa6\xf71\n FLAG:a1cf81c5e0872a7e0a4aec2e8e9f74c3\n
Indeed, this is the flag. So, we are done!
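The substitution identity behind new_block can also be sanity-checked offline with plain XOR arithmetic. Below is a sketch with random stand-in bytes: no real AES is involved, and the keystream variable merely plays the role of E_k(c_{i-1}).

```python
# Sanity check of the CFB injection identity: if p = keystream XOR c,
# then replacing c with c XOR p XOR m makes the new plaintext equal m.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)           # stands in for E_k(c_{i-1})
c = os.urandom(16)                   # original ciphertext block
p = xor(keystream, c)                # plaintext the server would decrypt
m = b'"; cat f* #'.ljust(16)         # payload, padded to the block size
c_new = xor(xor(c, p), m)            # c' = c XOR p XOR m

assert xor(keystream, c_new) == m    # the server now "decrypts" our payload
```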
|
https://grocid.net/category/math/
|
CC-MAIN-2017-17
|
refinedweb
| 457
| 74.08
|
03-26-2019
04:50 AM
Dear Experts
This is the scenario. Our client is part of a large organisation that comprises many companies. All users in this company use the same namespace for on-premise access, e.g. rootdomain.com.
Users from our client and their parent organisation use the following credentials to log in to on-premise resources: <username>@rootdomain.com. However, our client does not control rootdomain.com and will not be able to verify its ownership.
Now, they have procured Office 365 services [Power BI] and have a tenant say, client.onmicrosoft.com.
They are asking if their users can use their existing on-premise credentials to authenticate against Azure AD. My understanding is that it is not possible to do this without verifying the domain [rootdomain.com] and without using AAD Connect.
Am I correct?
If not, is there any way to authenticate to Azure AD using a third party authentication providers by using some apps in Azure?
Thanks in advance
View best response
03-26-2019
12:44 PM
The only way to use their on-premises credentials is to verify the domain, that includes any auth method that uses attributes other than the UPN as well. Perhaps they can verify a subdomain instead of the root domain?
03-26-2019
03:46 PM
@Vasil Michev Thanks.
|
https://techcommunity.microsoft.com/t5/Azure-Active-Directory/Authenticate-on-premise-users-without-verifying-the-Domain/m-p/389110
|
CC-MAIN-2019-51
|
refinedweb
| 229
| 64.81
|
Created on 2003-04-09 20:01 by sschwarzer, last changed 2007-11-26 23:13 by gvanrossum. This issue is now closed.
inspect.isclass(class_instance) fails if the
corresponding class uses a "wildcard" implementation of
__getattr__.
Example:
Python 2.2.2 (#1, Nov 13 2002, 22:53:57)
[GCC 2.95.4 20020320 [FreeBSD]] on freebsd4
Type "help", "copyright", "credits" or "license" for
more information.
>>> import inspect
>>> class X:
... def __getattr__(self, name):
... if name == 'foo':
... return 1
... if name == 'bar':
... return 2
... else:
... return "default"
...
>>> x = X()
>>> inspect.isclass(x)
1
The problematic expression in inspect.isclass is
hasattr(object, '__bases__') which returns a true value.
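For context beyond this old report: in current Python 3, inspect.isclass(obj) is implemented as isinstance(obj, type), so an instance faking __bases__ through a wildcard __getattr__ is no longer misclassified. A quick check:

```python
# In Python 3, inspect.isclass uses isinstance(obj, type) rather than
# hasattr(obj, '__bases__'), so the wildcard __getattr__ trick fails.
import inspect

class X:
    def __getattr__(self, name):
        return "default"

x = X()
assert hasattr(x, "__bases__")   # the wildcard __getattr__ still "provides" it
assert not inspect.isclass(x)    # but isclass is no longer fooled
assert inspect.isclass(X)        # while the class itself is still detected
```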
Logged In: YES
user_id=80475
Hmm. I'm not sure that counts as a bug. In an OO
language, it's a feature that objects can be made to look
like and be substituable for other types. In this case,
you've taught your object to be able to fake some classlike
behavior (having a __bases__ attribute).
OTOH, inspect could have a stronger test for classhood:
type(object) in (types.TypeType, types.ClassType)
Logged In: YES
user_id=383516
Hello Raymond, thanks for your reply. In fact, I'm also not
sure if it counts as a bug. I also suggested a patch (handle
__getattr__ requests for __bases__ with an AttributeError)
for the SF project which caused the problem.
I think, if there's a better way to decide on "class-ness"
than now, the code in inspect should be changed.
Fortunately, it doesn't have to be backward-compatible,
because the module is always distributed with a certain
interpreter version.
Logged In: YES
user_id=89016
type(object) in (types.TypeType, types.ClassType)
won't work with custom metaclasses.
isinstance(object, (type, types.ClassType))
would be better.
Logged In: YES
user_id=80475
Ping, if this change is made, will isclass() still be able to
find extension classes?
The addition of the hasattr(object, '__bases__') was made
by you in ver 1.11 about two years ago.
Logged In: YES
user_id=383516
Hi Facundo
The problem still exists in both Python 2.3.4 and 2.4.
A possible test case is:
import inspect
import unittest
class TestIsclass(unittest.TestCase):
def test_instance_with_getattr(self):
class Cls:
def __getattr__(self, name):
return "not important"
obj = Cls()
# obj is not a class
self.failIf(inspect.isclass(obj))
Logged In: YES
user_id=752496
Don't know yet if it's a bug or not, but in Py2.4.1
inspect.isclass() is still returning True in these cases...
Logged In: YES
user_id=80475
Ping, do you have a few minutes to look at this one and make
sure its the right thing to do.
Logged In: YES
user_id=565450
Due to this bug, 'pydoc modulename' is not working.
pydoc tries to access __name__ attribute of classes,
so it raises attribute error. (actually it is not a class,
but an instance only).
So please increase the priority of this bug.
And this case is also not working (same issue):
class X:
__bases__ = ()
x = X()
Obviously Ping isn't listening, so waiting for him is not productive.
Looking at the issue more, I can't really see a bug in inspect -- it's
the class definitions that are broken. So closing as "rejected".
> Due to this bug, 'pydoc modulename' is not working.
Can you be more specific? I can't reproduce this.
> And this case is also not working (same issue):
>
> class X:
> __bases__ = ()
>
> x = X()
Again, this is just a malformed class.
|
http://bugs.python.org/issue718532
|
crawl-003
|
refinedweb
| 581
| 69.07
|
Visual Studio 2010 is almost out the door. Now is a good time to look at further features we want to consider for future releases.
Our job as language-designers is to make the language better for its users. Sometimes, like with LINQ and lambdas in VS2008, we start from our anticipation of industry trends and then lead with our own vision of how things ought to be done better. And sometimes we respond more directly to user suggestions, sometimes using the designs they suggest.
The wish-list
We have an outstanding wish-list of 80+ ideas that came from us ourselves, from the groups within Microsoft who build on top of VB, from the blogs of “Most Valued Professionals” (MVPs) like Bill McCarthy and Đonny and from many other users.
Req0: Don’t add to (or change) the language
Core1: Iterators
Core2: Inline and multi-line comments
Core3: Dynamic pseudo-type (scoped late binding)
Core4: Flexibility with implementing properties
Core5: Overloading on optional parameters
Core6: “VB-Core”
Core7: Unify late-binder with early-binder
Core8: Attribute targets
Core9: Readonly auto-properties
Core10: Namespace Global
Core11: An “XML pattern”
Power1: REPL and “vbscript.net”
Power2: Async and resumable methods
Power3: Shared methods inside interfaces
Power4: Custom property and event templates
Power5: Custom anonymous types
Power6: __CALLER_MEMBER__
Power7: Dictionary and list literals
Power10: Reduce verbosity through #light
Power11: Extension properties
Req1: Put the loop control variable inside the For loop
Req2: Null-propagating field access
Req3: Multiline strings
Req5: Unsafe and pointer support
Req6: better casting
Req7: Separate syntax for assignment := and comparison ==
Req8: Use [ ] for arrays
Req9: Allow CObj in constants and attributes
Req10: Shared variables in method bodies
Req11: International date literals
Req12: Select Cast on object identity and type
Req13: Catches in using blocks
Req14: Non-empty default partial methods
Req15: GetType for instances, methods, properties
Req16: Modules that don’t auto-import their contents
Req17: Extension classes
Req19: Allow statements to start with parentheses
Req20: Range expressions
Req21: Allow unambiguous types from ambiguous namespaces
… AND ANOTHER 40+ TO COME!
Please give feedback
Over the coming month I’ll blog about every item on the wish-list along with our evaluation of it. We have our own ideas about what are the priorities for the VB language. You’ll have your own ideas – please tell us.
- We want to hear feedback from everyone!
- Write with scenarios that you think are awkward to code at the moment, even if you don’t have ideas on how to fix them: it’s good for us to know what “pain points” in the language are encountered most frequently by users.
- If you have specific ideas for changes to the VB language, write in with those too.
- If you think that other people’s ideas are good, please write to say so. If you encounter their problem-scenarios frequently, please say so.
- SCENARIOS ARE KING. If there’s an idea that you’d like us to prioritize, the very best way to bolster its chances is with a scenario that you’ve faced where the idea would have helped.
The best way to give feedback is through comments on this blog so that other people can read and respond, or directly by email to me lwischik@microsoft.com.
Practicalities
We can’t do everything. Sometimes we can’t even do anything. Two of the Erics on the C# language team have explained how we make our decisions.
Eric Gunnerson’s “Minus 100 Points“. Complexity itself has a cost: therefore only add new language features if their benefits outweigh this cost.
Eric Lippert’s “Why doesn’t the language do X?“ Every language feature requires design, specification, implementation, testing and documentation: therefore only allocate our resources on the features that bring the most benefit.
One idea I’ve liked ever since Paul Vick posted about it was the option of lower-cased keywords:
I think the language would be more readable with fewer "Screaming Initial Caps". It seems like a fairly simple change, given that Paul said he was able to create "a bootleg of the compiler" to do it.
(Obviously on a team there could be source-control hassle if one person’s compiler makes the keywords all lower-case, and another’s re-upper-cases them. But it’s just another matter of coding style for the team to agree on. And can’t every source control system be set to ignore case anyway?)
How about it?
An idea for the VB compiler would be to provide some option that would show all compiler warnings, instead of capping the list at 100.
Currently, there is no way for us to estimate the effort that would be involved in fixing all of the warnings in our legacy code base.
Yeah, the <=100 warnings/errors annoyed me when I was porting VB6 projects, no way to know how much work was left.
Hey, when I click on Core5 FireFox shows me Core 6
C# yield statement analogy in VB! Otherwise making lazy stuff work takes a lot of work 🙂
When will we be able to get rid of byval in all of our methods?
What Rob said, seconded. By now I think it’s safe to say that we all know that ByVal is the default, and it’s easier to see the few cases of ByRef if we don’t have all those ByVals cluttering up the screen. Can you ditch them?
I second Ivan. Generators are useful. Not necessarily with Yield, exactly like in C#. It may well be more interesting and fitting into VB to explore some other approaches to coroutines, as long as they aren’t less expressive.
A C# <–> VB converter as SharpDevelop does, both for a file and for a whole project.
Improve the debugger :
– on big projects, the step by step in debug mode is slow .. slow
– add Watchpoints as they exist in PDS Basic 7.1 (enter expression which will break when True)
For example, myValue = 10 and when myvalue=10 in any place, the code stops and starts in debug mode. You don’t need to put breakpoint on a specific line, you put the watchpoint for the whole code.
– in debug mode in step by step, we can’t see the output display (the form with the controls). Allways in PDS 7.1, it was possible with F4.
Something that seems to be missing here are two suggestions from Bill McCarthy, namely "NameOf" (similar to TypeOf, but returns a string instead of a type) &/or the ability to use an attribute like "<RaisesNotifyPropertyChanged>" on a property, either of which would help ENORMOUSLY in the implementing of INotifyPropertyChanged which has to be hand-coded AGAIN & AGAIN & AGAIN. The compiler could help out here as it has with many other tedious things, like auto properties & anaonymous types etc.
In the next version of VB please enhance the "Or" operator to shorten code. For example
Public Function IsCharIsVowel(ByVal C as Char) as Boolean
If C = "A" or C="E" or C="I" or C="O" or C="U" Then
Return True
Else
Return False
End if
End Function
'Suggestion to Improve to this syntax
Public Function IsCharIsVowel(ByVal C as Char) as Boolean
If C IN ( "A" ,"E" ,"I" ,"O" ,"U") Then
Return True
Else
Return False
End if
End Function
Function IsVowel(ByVal C As String) As Boolean
Return "AEIOU".Contains(C)
End Function
Function IsVowel(ByVal C As String) As Boolean
Select Case C
Case "A","E","I","O","U"
Return True
Case Else
Return False
End Select
End Function
|
https://blogs.msdn.microsoft.com/lucian/2010/01/28/a-wish-list-for-future-versions-of-vb/
|
CC-MAIN-2016-44
|
refinedweb
| 1,267
| 58.52
|
I tried to test very simple requester-reply communication model of Connext 5.2 Professional using Visual Studio 2010 in x86win32 env.
With following simple idl,
struct Request { short request_service;};
struct Reply{short reply_service;};
in generated C# example code, I wrote following,
Requester<Request, Reply> requester;
under participant creation code in publisher.cs.
However, in Visual studio development environment, it shows "Requester" is not defined. (can't find type or namespace)
With my environment, all of traditional pub-sub model is working well.
I just start to test request-reply mechanism, please let me have answer.
from Chumsu. South Korea
Hello,
The "undefined Requester" happens because you need to also include the DLL rticonnextmsgdotnet45.dll as a reference into your VisualStudio project. This library is under the "lib" directory in your RTI Connext installation. For example in my system is at: C:\Program Files\rti_connext_dds-5.2.0\lib\i86Win32VS2012\rticonnextmsgdotnet45.dll
To add the library as a reference in VisualStudio you need to use the SolutionExplorer. Under each Project you will see an entry called "References". Right-click on it to get a pull-up menu and select "Add reference..." (see figure below).
This will popup the "reference Manager" window. At the bottom you will see the "Browse..." button. Select that, find the rticonnextmsgdotnet45.dll library in your RTI Connext installation and add it. See below:
You will need to repeat this process for all the projects, that is both the one that you use for the Requester and the one you use for the Replier...
Gerardo
Hi, Gerado.
As you recommended I put rticonextmsgdotnet40d.dll into reference, and add code "using RTI.Connext.RequestReply using RTI.Connext.Infrastructure".
However, when I compile with code "Requester<.., ..> ...", compiler gives several errors(9 errors).
You can see these error list in attached image.
Sorry for Korean language output.
If it is possible, can I get some example project which works well with Request/Reply and Pub/Sub mechanism.
Regards,
Chumsu. zzumtei@gmail.com
South Korea.
I want to test client/server type comm. in DDS.
Please can anyone let me know the way to build such an application.
Chumsu.
|
https://community.rti.com/forum-topic/question-c-requester-class-not
|
CC-MAIN-2020-05
|
refinedweb
| 355
| 52.46
|
Abstract class with main method
Bennett Rainville
Greenhorn
Joined: Feb 16, 2007
Posts: 5
posted
May 16, 2007 15:38:00
0
Hello,
I am attempting to write a series of
java
classes, each of which will be executed from the command line. Each of these classes will have some shared functionality, and I am thinking of using an abstract class to hold this. However, I am doing something that seems to be pretty dumb, and I'm wondering if there is a better way of doing what I want to do. Here's a simplified version of my code:
public abstract class SuperClass {
    public void setup() {
        System.out.println("In setup");
    }
    public void finish() {
        System.out.println("In finish");
    }
    public abstract void operate();
    public static void main( String[] args ) throws Exception {
        SuperClass instance = ( SuperClass ) Class.forName( args[0] ).newInstance();
        instance.setup();
        instance.operate();
        instance.finish();
    }
}

public class SubClassA extends SuperClass {
    public void operate() {
        System.out.println("In SubClassA");
    }
}

public class SubClassB extends SuperClass {
    public void operate() {
        System.out.println("In SubClassB");
    }
}
As I would expect, if I run the main method on SubClassA with SubClassA as a paremeter, it prints out "In setup", "In SubClassA", and "In finish". However, there's got to be a way where I can get rid of the need to pass in this parameter, and in doing so remove the ugly Class.forName on line 1 of the main method. Any ideas?
Stan James
(instanceof Sidekick)
Ranch Hand
Joined: Jan 29, 2003
Posts: 8791
posted
May 16, 2007 18:37:00
0
I like your interest in pluggable implementations. One way to approach ugly code is to move it into another class and forget about it.
Seriously, you can move object creation to some kind of creational helper (avoiding the word Factory which has a specific meaning that I don't want to get into).
// current
SuperClass instance = ( SuperClass ) Class.forName( args[0] ).newInstance();
// replace with
SuperClass object = ObjectVendor.getObject( args[0] );
That isolates your logic from any creation issues, such as exceptions. You don't have to know if the Vendor (it's hard to make up new words that don't already have meaning here) uses Class.forName() or looks up the classname by a logical key, makes new instances every time or gives you existing instances from a pool.
I'd look to break your program up a bit, move main() to a new class that sets up some configuration, like which subclass to use. We could maybe make that Vendor fit the Factory Method
pattern
.
Any of that sound useful?
Bennett Rainville
Greenhorn
Joined: Feb 16, 2007
Posts: 5
posted
May 17, 2007 11:03:00
0
That's a good suggestion. I'll add something like that.
What I'd really like, though, is to move completely away from the Class.forName, with something like this:
public abstract class SuperClass {
    ...
    protected abstract static SuperClass getInstance();
    protected abstract void operate();
    protected final void setup() {
        System.out.println("In setup");
    }
    protected final void finish() {
        System.out.println("In finish");
    }
    public static void main( String[] args ) {
        SuperClass inst = getInstance();
        inst.setup();
        inst.operate();
        inst.finish();
    }
}
public class SubClassA extends SuperClass {
    protected SuperClass getInstance() {
        return new SubClassA();
    }
    protected void operate() {
        System.out.println("In SubClassA");
    }
}
Doing something like this would eliminate the need for the Class.forName in the main method. Note that the getInstance method must be static for it to be accessed in the main method. However, a method cannot be both static and abstract, so my getInstance() idea won't work as it is written. I have also experimented with making it not abstract (only static), and overriding it in SubClassA. However, even though I'm running main() on SubClassA, and SubClassA has a getInstance method, the getInstance method in SuperClass is being executed, not the one in SubClassA (in fact, I'm not even sure you can override static methods). Any thoughts?
[ May 17, 2007: Message edited by: Bennett Rainville ]
Garrett Rowe
Ranch Hand
Joined: Jan 17, 2006
Posts: 1296
posted
May 17, 2007 20:01:00
0
How about this, I removed the Class.forName(String) using Enum.valueOf. The only catch here is that the enum type must know about all subclasses at compile time (that means it's nowhere near as flexible as Class.forName()), then you pick the appropriate enum at runtime. Doesn't make anything prettier or more elegant though, quite the contrary. But it was fun seeing if it would work.
abstract class SuperClass {
    protected abstract void operate();
    protected final void setup() {
        System.out.println("In setup");
    }
    protected final void finish() {
        System.out.println("In finish");
    }
    public void doIt() {
        setup();
        operate();
        finish();
    }
}

class SubClassA extends SuperClass {
    public void operate() {
        System.out.println("In SubClassA");
    }
}

class SubClassB extends SuperClass {
    public void operate() {
        System.out.println("In SubClassB");
    }
}

enum MyEnum {
    A {
        public SuperClass makeSuper() {
            return new SubClassA();
        }
    },
    B {
        public SuperClass makeSuper() {
            return new SubClassB();
        }
    };

    public abstract SuperClass makeSuper();
}

public class EnumFactory {
    public static void main(String[] args) {
        MyEnum me = MyEnum.valueOf(args[0]);
        SuperClass sc = me.makeSuper();
        sc.doIt();
    }
}
Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter
|
http://www.coderanch.com/t/382684/java/java/Abstract-class-main-method
|
CC-MAIN-2015-32
|
refinedweb
| 931
| 54.93
|
Base class for highlight rules. More...
#include <highlightrule.h>
Base class for highlight rules.
This abstracts from the actual implementation for matching.
Creates a rule for the given element. (Although each rule can concern more than one program element, we provide only this convenience constructor with a single name: if the rule concerns more than one element, one can use the addElem method.)
Adds an element name to the list of this rule.
Implemented in srchilite::RegexHighlightRule.
Performs replacement of references in this rule.
Implemented in srchilite::RegexHighlightRule.
Try to match this rule against the passed string (implemented by calling the pure virtual function tryToMatch below).
The passed token is assumed to be reset (i.e., no existing matching information is stored in it when passing it to this method).
Try to match this rule against the passed string.
Implemented in srchilite::RegexHighlightRule.
|
http://www.gnu.org/software/src-highlite/api/classsrchilite_1_1HighlightRule.html
|
CC-MAIN-2017-09
|
refinedweb
| 142
| 68.36
|
Talk:Proposed features/culture
Taxonomy should not be inserted into the database
The whole idea of "namespaces" like amenity, tourism, culture etc. is actually taxonomy in disguise. Discussing whether a museum should be tagged as amenity=museum, tourism=museum or culture=museum does not change the basic fact that that particular POI is simply a "museum".
We could later argue whether museums are amenities, culture items or something else, but this discussion does not belong in the geographic data themselves.
Let me be clear: I hate the current division between amenity= and tourism=. But we are not discussing that. We are discussing whether we should introduce a third, or fourth, "namespace". And I strongly oppose this.
For instance: are nightclub "culture" or "amenities"? We could argue that this depends on our own "culture": if I oppose nightclubs on religious grounds, I could very well say that they are not "cultural" at all! But I would still acknowledge that a particular POI is, in fact, a nightclub. FedericoCozzi 12:06, 8 November 2010 (UTC)
- While I agree with the general statement that "the OSM database needs not contain a taxonomy", I can very well accept the fact that — for mappers (i.e. I am not speaking of semantics here) — a better categorization of tags is helping newbies memorizing tags. And some forgetful individuals like myself, who keep forgetting if a "fire_hydrant" is an amenity or not. In some places it may even help stylesheets filtering out a certain category of POIs, but that's secondary. On the other hand, I don't currently see a point where a culture "namespace" would hurt the database. I can live with or without a change. Kay D 13:31, 8 November 2010 (UTC)
- I see your point, but be careful: if the categorization is not fully agreed upon, we will end with two different tags with slightly different semantics. Let's suppose that culture=nightclub get passed with a slim majority: some people could argue that we now have two different nightclubs, those that are "cultural" and those that aren't. Something similar almost happened with shop=ice_cream vs amenity=ice_cream: someone argues that there is a difference between the two, and that they have different semantics. If this happened to the "culture" key, you could forget your dream of "better categorization": you would now face two slightly different tags, for two slightly different POIs. FedericoCozzi 16:14, 22 December 2010 (UTC)
http://wiki.openstreetmap.org/wiki/Talk:Proposed_features/culture
RE: Import external data - web query
- From: Vijay Kotian <VijayKotian@xxxxxxxxxxxxxxxxxxxxxxxxx>
- Date: Tue, 15 May 2007 05:43:00 -0700
Hi Challa,
Your reply to my query is very extensive, but it deals with importing a file from a
database or another data range. I am looking to import data from an
internet site. In New Web Query (Import External Data), after entering the
web address, the data (table) from the site appears quite late. The button to
import data appears at various other fields but not at the table which I would
like to import. Without the button I am unable to import the data. Is there any
other means to import the data with a web query?
Thank you.
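When the import arrow never appears next to the target table, one workaround is to fetch and parse the page's table outside Excel, then save it as CSV and open that. A minimal sketch using only Python's standard library; the HTML snippet here is a made-up stand-in for the target page (in practice it would come from `urllib.request.urlopen(url).read()`):

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the cell text of every HTML table on a page, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []              # start a new row
        elif tag in ("td", "th"):
            self._cell = []             # start collecting a cell's text

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._cell is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)

html = "<table><tr><th>Ticker</th><th>Price</th><tr><td>ABC</td><td>12.3</td></table>"
parser = TableExtractor()
parser.feed("<table><tr><th>Ticker</th><th>Price</th></tr><tr><td>ABC</td><td>12.3</td></tr></table>")
print(parser.rows)  # [['Ticker', 'Price'], ['ABC', '12.3']]
```

The resulting rows can be written out with the csv module and opened directly in Excel, sidestepping the web query dialog entirely.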
"challa prabhu" wrote:
Hi,
Refer to the following information:
Troubleshoot importing data
Creating data sources
The data source I want isn't listed in the Select Data Source dialog box.
If you can't find your data source (data source: A stored set of "source"
information used to connect to a database. A data source can include the name
and location of the database server, the name of the database driver, and
information that the database needs when you log on.), click New Source in
the Select Data Source dialog box, and then click Other/Advanced under What
kind of data source do you want to connect to in the Data Connection Wizard.
If you are still unable to find your data source, check with your system
administrator or the vendor that provides the database you want to access.
I can't create a new data source in Microsoft Query.
Check the server address and logon information Before you set up a data
source, make sure you know the address where the database is located on your
network and have the necessary permissions to connect to the database and log
on. See the administrator of your database for a logon name, password
(password: A sequence of characters needed to access computer systems, files,
and Internet services. Strong passwords combine uppercase and lowercase
letters, numbers, and symbols.), or any other permissions required, and to
make sure the access you've been granted is working properly.
Check your driver First, make sure you have the right ODBC driver (Open
Database Connectivity (ODBC) driver: A program file used to connect to a
particular database. Each database program, such as Access or dBASE, or
database management system, such as SQL Server, requires a different driver.)
or data source driver (data source driver: A program file used to connect to
a specific database. Each database program or management system requires a
different driver.) for your data source (data source: A stored set of
"source" information used to connect to a database. A data source can include
the name and location of the database server, the name of the database
driver, and information that the database needs when you log on.). ODBC
drivers and data source drivers allow you to connect to new databases as they
become available. However, you must make sure the correct driver is installed for
the type of database you're using.
Make sure the driver works with Excel In addition to the drivers
provided with Microsoft Office, you can use ODBC and data source drivers
provided by third-party manufacturers. Before you try to use a third-party
driver, make sure the manufacturer has tested the driver with Microsoft
Excel. For some databases, the driver supplied with the database software may
be the best choice. Contact the administrator of your database to find out
what's available and what works best at your site.
Make sure the driver is properly installed
To display the list of available drivers, point to Import External Data on
the Data menu, and then click New Database Query.
Double-click New Data Source on the Databases or OLAP Cubes tab.
Type a name in step 1 of the Create New Data Source dialog box, and then
click the list in step 2. If you don't see the driver you need, you should
check to make sure the ODBC driver or data source driver is installed
properly.
Make sure you supplied all of the configuration information After you've
installed the driver and selected it in step 2 of the Create New Data Source
dialog box, make sure you provide all of the necessary information in step 3
of the dialog box. For information about a Microsoft driver, click Connect,
and then click Help in the setup dialog box for the driver. For third-party
drivers, see the Help system or the documentation for the driver.
If you are setting up a data source with an ODBC driver or data source
driver provided by Microsoft, click the name of your driver for information
about the settings you should make in step 3 of the Create New Data Source
dialog box.
My data source has an asterisk next to it.
The data source (data source: A stored set of "source" information used to
connect to a database. A data source can include the name and location of the
database server, the name of the database driver, and information that the
database needs when you log on.) is from a version of Microsoft Query earlier
than Query 97.
Data sources created in versions of Query earlier than Query 97 have a
different format from data sources in later versions of the product. These
data sources and queries (query: In Query or Access, a means of finding the
records that answer a particular question you ask about the data stored in a
database.) can still be used with later versions of the product, but queries
that are created by using these data sources cannot be shared with other
users. Versions earlier than Query 97 store data source information as part
of your Microsoft Windows operating system, and that information is available
only on your system.
Identify data sources created with earlier versions of Query
On the Data menu, point to Import External Data, and then click New Database
Query.
Click Options in the Choose Data Source dialog box, and then select the
Include registry DSNs in list of available databases check box.
All data sources that were created by using a version earlier than Query 97
appear in the Choose Data Source dialog box with asterisks next to their
names.
Delete old data sources After you identify data sources created with
earlier versions of Query, on the Databases tab in the Choose Data Source
dialog box, click the data source you want to remove, and then click Delete.
Create shared data sources If you want to share queries or report
templates (report template: An Excel template (.xlt file) that includes one
or more queries or PivotTable reports that are based on external data. When
you save a report template, Excel saves the query definition but doesn't
store the queried data in the template.) that use data from the external
databases that are specified in your non-shareable data sources, create new
data sources for these databases. Use the new data sources to create the
queries, query files, and report templates.
Importing data
A message indicates that the path to my database is not valid.
Check for a mapped network drive If your database is on a shared network
directory, when you set up the data source (data source: A stored set of
"source" information used to connect to a database. A data source can include
the name and location of the database server, the name of the database
driver, and information that the database needs when you log on.) and
selected the database file, the path to the database may have been recorded
in the data source with the mapped drive letter in use by your system at that
time. For example, if your database is named Inventory.mdb, and you had drive
G mapped to the shared network directory where this database is stored, your
data source might record this location as G:\public\Inventory.mdb. When you
try to use this data source, or you or other users try to run queries created
with this data source, the ODBC driver (Open Database Connectivity (ODBC)
driver: A program file used to connect to a particular database. Each
database program, such as Access or dBASE, or database management system,
such as SQL Server, requires a different driver.) displays a message that the
path is not valid if drive G is not mapped to the same shared network
directory.
Use an alternative to drive mapping If you are using the Microsoft Access
driver or the Microsoft Excel driver, you can correct this problem by
creating a new data source. When you specify the location of the database
file, don't select the mapped drive for the shared network directory.
Instead, type the UNC address (universal naming convention (UNC): A naming
convention for files that provides a machine-independent means of locating
the file. Rather than specifying a drive letter and path, a UNC name uses the
syntax \\server\share\path\filename.) of the shared network directory, and
then locate the database file. For example, if your database file is stored
on a server named Shared, you could type \\Shared\public and then select the
file Inventory.mdb.
Map the same network drive letter before using the data source For other
ODBC drivers, before you use a data source or run a query created with that
data source, make sure the same drive is mapped to the shared network
directory where the database is located as was mapped when the data source
was created.
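Both remedies above amount to replacing a machine-dependent drive letter with a machine-independent UNC prefix. A toy Python sketch of that substitution — the mapping table and paths are illustrative, matching the Inventory.mdb example above:

```python
def to_unc(path, drive_map):
    """Rewrite a mapped-drive path to its UNC equivalent."""
    drive, _, rest = path.partition("\\")       # split off "G:" from the rest
    share = drive_map.get(drive.upper())
    if share is None:
        raise ValueError("no UNC mapping known for drive " + drive)
    return share + "\\" + rest

# Drive G is mapped to the shared network directory \\Shared\public.
drive_map = {"G:": r"\\Shared\public"}
print(to_unc(r"G:\Inventory.mdb", drive_map))   # \\Shared\public\Inventory.mdb
```

The UNC form works on any machine that can reach the server, regardless of which drive letters happen to be mapped there.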
Sorting and formatting are incorrect after I refresh an external data range.
Formatting changes in Query won't affect Excel Formatting that you apply
in Query affects the view of the result set (result set: The set of records
returned when you run a query. You can see the result set of a query in
Query, or you can return a result set to an Excel worksheet for further
analysis.) only in Query. When you return the result set to Microsoft Excel,
formatting changes you made while in Query— such as hiding fields (field: A
category of information, such as last name or order amount, that is stored in
a table. When Query displays a result set in its Data pane, a field is
represented as a column.) or changing the width of a column, the height of
rows, or the font, style, or size of text— are not displayed in Excel.
Preserve Excel formatting when you refresh Each time you refresh
(refresh: To update data from an external data source. Each time you refresh
data, you see the most recent version of the information in the database,
including any changes that were made to the data.) an external data range
(external data range: A range of data that is brought into a worksheet but
that originates outside of Excel, such as in a database or text file. In
Excel, you can format the data or use it in calculations as you would any
other data.) Excel replaces the existing data with new data and also removes
any Excel outlining and subtotals. You can preserve formatting, but not row
sorting or outlining, for an external data range by clicking Data Range
Properties on the External Data toolbar (toolbar: A bar with buttons and
options that you use to carry out commands. To display a toolbar, click
Customize on the Tools menu, and then click the Toolbars tab.) and making
sure that the Preserve cell formatting check box is selected under Data
formatting and layout.
To preserve sorting, copy the data Each time you refresh an external data
range, Excel automatically removes any sorting you applied. If you want to
sort or format data from an external data range and keep all sorting and
formatting, copy the data, and then use the Paste Special command and select
the Values option to paste the data onto another worksheet in the workbook. Then
format the data the way you want. The sorting and formatting will be
preserved; however, you won't be able to refresh the data because the
underlying query (query: In Query or Access, a means of finding the records
that answer a particular question you ask about the data stored in a
database.) associated with the external data range was not copied.
Record a macro to restore sorting and formatting If you want to be able
to refresh an external data range and keep your sorting and formatting, try
recording a macro for formatting the data in your external data range and
then running the macro after you refresh the data. On the Tools menu, point
to Macro, and then click Record New Macro. Specify the options you want, and
click OK. Format the external data range the way you want, and then click the
Stop Macro button on the Stop Recording toolbar. Run the macro after you
refresh the data.
To do this, first set the security level to Medium or Low:
On the Tools menu, click Options.
Click the Security tab.
Under Macro Security, click Macro Security.
Click the Security Level tab, and then select the security level you want to
use.
Open the workbook that contains the macro (macro: An action or a set of
actions you can use to automate tasks. Macros are recorded in the Visual
Basic for Applications programming language.).
On the Tools menu, point to Macro, and then click Macros.
In the Macro name box, enter the name of the macro you want to run.
Do one of the following:
Run a macro in a Microsoft Excel workbook
Click Run.
If you want to interrupt, press ESC.
Run a macro from a Microsoft Visual Basic module
Click Edit.
Click Run Sub/UserForm .
Tip
If you want to run a different macro while you are in the Visual Basic
Editor (Visual Basic Editor: An environment in which you write new and edit
existing Visual Basic for Applications code and procedures. The Visual Basic
Editor contains a complete debugging toolset for finding syntax, run-time,
and logic problems in your code.), click Macros on the Tools menu. In the
Macro name box, enter the name of the macro you want to run, and then click
Run.
A range of blank cells is selected when I return data to Microsoft Excel.
Check your ODBC driver You may not be using a compatible ODBC driver
(Open Database Connectivity (ODBC) driver: A program file used to connect to
a particular database. Each database program, such as Access or dBASE, or
database management system, such as SQL Server, requires a different
driver.). If you're using an ODBC driver from an earlier version of Microsoft
Excel or Query, you must install the most recent version of the driver to
import data.
Check your system's free memory Your computer might not have enough
memory available to import the data. To check available memory in Windows
2000, switch to the Windows desktop. Right-click the My Computer icon, click
Properties, and then click the Advanced tab. Click Performance Options, and
then click Change to see the percentage of available memory. To free some
memory, try closing unnecessary documents and applications.
Check whether Excel is ignoring other programs The Ignore other
applications check box may be selected in Excel. This option may prevent
other programs, including Microsoft Query, from establishing a dynamic data
exchange (DDE) (Dynamic Data Exchange (DDE): An established protocol for
exchanging data between Microsoft Windows-based programs.) connection to
Excel. On the Tools menu, click Options, and then click the General tab.
Under Settings, make sure that the Ignore other applications check box is
cleared. Then run the query again.
I run out of disk space when I try to import data.
Determine how much space is needed When you create a query (query: In Query
or Access, a means of finding the records that answer a particular question
you ask about the data stored in a database.), the query is temporarily
placed on your hard disk. As a general rule, you should have a minimum of 3
to 5 MB of available disk space to create the temporary query file. If your
query is large, you will need more free disk space. If enough disk space is
not available, the query will take longer to retrieve data or the query may
quit running.
Check for available disk space To check available hard disk space in
Windows 2000, switch to the Windows desktop, double-click the My Computer
icon, and then click the disk you want to check. On the File menu, click
Properties. To free some space on your hard disk, try emptying the Recycle
Bin, backing up unneeded files and then removing them from your hard disk, or
removing Windows components that you don't use. For more information about
freeing hard disk space, see Microsoft Windows Help.
Strategies you can try when disk space is limited If you have only a
limited amount of space available on your hard disk, try the following:
Simplify your query Make sure you include only those tables (table: A
collection of data about a particular subject that is stored in records
(rows) and fields (columns).) and fields (field: A category of information,
- References:
- RE: Import external data - web query
- From: challa prabhu
http://www.tech-archive.net/Archive/Excel/microsoft.public.excel.misc/2007-05/msg01726.html
What's new in Celery 3.0 (Chiastic Slide)
Highlights
Overview
A new and improved API, that's both simpler and more powerful.
The CELERYD_FORCE_EXECV setting is enabled by default if the event-loop isn't used.
New celery umbrella command
All Celery’s command-line programs are now available from a single celery umbrella command.
You can see a list of sub-commands and options by running:
$ celery help
Commands include:
celery worker (previously celeryd).
celery beat (previously celerybeat).
celery amqp (previously camqadm).
The old programs are still available (celeryd, celerybeat, etc.), but you're discouraged from using them.
Now depends on billiard.
Issue #625
Issue #627
Issue #640
django-celery #122
django-celery #124
celery.app.task is no longer a package.
Last version to support Python 2.5.
Now using UTC: timestamps are now in UTC, which means they aren't compatible with Celery versions prior to 2.5. You can disable UTC and revert back to old local time by setting the CELERY_ENABLE_UTC setting.
Redis: Ack emulation improvements, reducing the possibility of data loss.
Acks are now implemented by storing a copy of the message when the message is consumed. The copy isn’t removed until the consumer acknowledges or rejects it.
This means that unacknowledged messages will be redelivered either when the connection is closed, or when the visibility timeout is exceeded.
Visibility timeout
This is a timeout for acks, so that if the consumer doesn't ack the message within this time limit, the message is redelivered to another consumer.
The timeout is set to one hour by default, but can be changed by configuring a transport option:

BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 18000}  # 5 hours
Note
Messages that haven't been acked are redelivered only when the visibility timeout expires, so with the default of one hour it'll take a long time for messages to be redelivered in the event of a power failure; if that happens you could temporarily set the visibility timeout lower to flush out messages when you start up the systems again.
News
Chaining Tasks
Tasks can now have callbacks and errbacks attached using the link and link_error keyword arguments, which take subtask instances.
AsyncResult.iterdeps
Recursively iterates over the task's dependencies, yielding (parent, node) tuples.
Raises IncompleteStream if any of the dependencies hasn't returned yet.
AsyncResult.graph
A DependencyGraph of the task's dependencies. With this you can also convert to dot format:

with open('graph.dot', 'w') as fh:
    result.graph.to_dot(fh)
then produce an image of the graph:
$ dot -Tpng graph.dot -o graph.png
A new special subtask called chain.
Redis: Priority support. Client-side priority support isn't as reliable as priorities on the server side, which is why the feature is nicknamed "quasi-priorities"; using routing is still the suggested way of ensuring quality of service, as client-implemented priorities fall short in a number of ways.
Redis: Now cycles queues so that consuming is fair.
group, chord and chain are now subtasks. The group construct replaces TaskSet, since it was very difficult to migrate the TaskSet class to become a subtask.
A new shortcut has been added to tasks:
>>> task.s(arg1, arg2, kw=1)
as a shortcut to:
>>> task.subtask((arg1, arg2), {'kw': 1})
Tasks can be chained by using the | operator:
>>> (add.s(2, 2) | pow.s(2)).apply_async()
Subtasks can be "evaluated" using the ~ operator:
>>> ~add.s(2, 2)
4
>>> ~(add.s(2, 2) | pow.s(2))
is the same as:
>>> chain(add.s(2, 2), pow.s(2)).apply_async().get()
A new subtask_type key has been added to the subtask dictionary.
New remote control commands
These commands were previously experimental, but they've proven stable and are now documented as part of the official API.
Subtasks can now be immutable, which means that the arguments
won’t be modified when calling callbacks:
>>> chain(add.s(2, 2), clear_static_electricity.si())
means it'll not receive the argument of the parent task, and .si() is a shortcut to:
>>> clear_static_electricity.subtask(immutable=True)
Logging Improvements
The standard logging module is now used throughout. All loggers inherit from a common logger called "celery".
Before, task.get_logger would set up a new logger for every task, and even set the log level. This is no longer the case.
Instead all task loggers now inherit from a common "celery.task" logger that's set up when programs call setup_logging_subsystem.
Instead of using LoggerAdapter to augment the formatter with the task_id and task_name field, the task base logger now use a special formatter adding these values at run-time from the currently executing task.
In fact,
task.get_logger
Task registry no longer global.
Abstract tasks are now lazily bound.
Lazy task decorators
The @task decorator is now lazy when used with custom apps.
That is, if accept_magic_kwargs is enabled (hereby called "compat mode"), the task decorator executes inline like before; however, for custom apps the @task decorator now returns a special PromiseProxy object that's only evaluated on access.
The celery program will search for a module named celery, and get the celery attribute from that module. For example, if you have a project named proj where the celery app is located in proj/celery.py, the celery program can find it.
… could not be signaled (Issue #595).
Contributed by Brendon Crawford.
Redis event monitor queues are now automatically deleted (Issue #436).
App instance factory methods have been converted to be cached descriptors that create a new subclass on access.
There's a new celery command, which is an entry-point for all other commands. The main for this command can be run by calling celery.start().
Annotations now support decorators if the key starts with '@'.
For example:

def debug_args(fun):
    @wraps(fun)
    def _inner(*args, **kwargs):
        print('ARGS: %r' % (args,))
        return fun(*args, **kwargs)
    return _inner

CELERY_ANNOTATIONS = {
    'tasks.add': {'@__call__': debug_args},
}
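Outside Celery, the same annotation decorator can be exercised directly to see what it does — plain Python, with add standing in for a task. Note that the inner wrapper here also calls through to the wrapped function so the task still returns its result (an addition to the flattened example above, which only printed):

```python
from functools import wraps

def debug_args(fun):
    """Print the positional arguments, then call the wrapped function."""
    @wraps(fun)
    def _inner(*args, **kwargs):
        print('ARGS: %r' % (args,))
        return fun(*args, **kwargs)
    return _inner

@debug_args
def add(x, y):
    return x + y

add(2, 3)  # prints ARGS: (2, 3) and returns 5
```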
Also tasks are now always bound by class so that annotated methods end up being bound.
The CELERYD_FORCE_EXECV setting is now enabled by default.
If the old behavior is wanted the setting can be set to False, or by using the new --no-execv option to celery worker.
The deprecated module celery.conf has been removed.
The CELERY_TIMEZONE setting now always requires the pytz library to be installed (except if the timezone is set to UTC).
The Tokyo Tyrant backend has been removed and is no longer supported.
Now uses maybe_declare() to cache queue declarations.
There's no longer a global default for the CELERYBEAT_MAX_LOOP_INTERVAL setting; it is instead set by individual schedulers.
Worker: now truncates very long message bodies in error reports.
No longer deep-copies exceptions.
The worker/Beat no longer logs the start-up banner twice, and no longer subscribes to the remote control command queue twice.
Probably didn't cause any problems, but was unnecessary.
Internals
app.broker_connection is now app.connection. Both names still work.
Compatibility modules are now generated dynamically upon use.
These modules are celery.messaging, celery.log, celery.decorators and celery.registry.
celery.utils has been refactored into multiple modules.
Now using kombu.utils.encoding instead.
Experimental
celery.contrib.methods: Task decorator for methods.
Unscheduled Removals
Deprecation Time-line Changes
See the Celery Deprecation Time-line. Inspect commands don't modify anything, while idempotent control commands that make changes are on the control objects.
Fixes
Retry SQLAlchemy backend operations on DatabaseError/OperationalError (Issue #634)
Tasks that called retry weren't acknowledged if acks late was enabled.
Fix contributed by David Markey.
The message priority argument wasn’t properly propagated to Kombu (Issue #708).
Fix contributed by Eran Rundstein
https://docs.celeryq.dev/en/latest/history/whatsnew-3.0.html
In some applications (usually real-time applications), a number of
processes perform a series of tasks. In such applications, the sequence
in which a process executes can be controlled or synchronized by means
of process priority. The basic method of synchronization by priority
involves executing the process with the highest priority while
preventing the other application processes from executing.
If you use process priority for synchronization, be aware that if the
higher-priority process is blocked, either explicitly or implicitly
(for example, when doing an I/O), the lower-priority processes can run,
corrupting the data produced by the higher-priority process's activities.
Because each processor in a multiprocessor system, when idle, schedules
its own work load, it is impossible to prevent all other processes in
the system from executing. Moreover, because the scheduler guarantees
only that the highest-priority and computable process is scheduled at
any given time, it is impossible to prevent another process in an
application from executing.
Thus, application programs that synchronize by process priority must be
modified to use a different serialization method to run correctly in a
multiprocessor system.
6.7.4 Synchronizing Multiprocess Applications
The operating system provides the following techniques to synchronize
multiprocess applications:
The operating system provides basic event synchronization through event
flags. Common event flags can be shared among cooperating processes
running on a uniprocessor or in an SMP system, though the processes
must be in the same user identification code (UIC) group. Thus, if you
have developed an application that requires the concurrent execution of
several processes, you can use event flags to establish communication
among them and to synchronize their activity. A shared, or common,
event flag can represent any event that is detectable and agreed upon
by the cooperating processes. See Section 6.8 for information about
using event flags.
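The idea of cooperating processes agreeing on a shared flag can be sketched in Python with threading.Event, a rough single-process analog of a common event flag (the producer/consumer names are illustrative, not from OpenVMS):

```python
import threading

data_ready = threading.Event()   # plays the role of a common event flag
results = []

def producer():
    results.append(42)           # produce some shared data
    data_ready.set()             # "set the event flag"

def consumer():
    data_ready.wait()            # block until the flag is set
    results.append(results[0] + 1)

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [42, 43]
```

The consumer never touches the shared data until the producer signals that the agreed-upon event has occurred, which is exactly the discipline the common event flag enforces between processes.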
The lock management system services---Enqueue Lock Request (SYS$ENQ),
and Dequeue Lock Request (SYS$DEQ)---provide multiprocess
synchronization tools that can be requested from all access modes. For
details about using lock management system services, see Chapter 7.
Synchronization of access to shared data by a multiprocess application
should be designed to support processes that execute concurrently on
different members of an SMP system. Applications that share a global
section can use MACRO-32 interlocked instructions or the equivalent in
other languages to synchronize access to data in the global section.
These applications can also use the lock management system services for
synchronization.
Most application programs that run on an operating system in a
uniprocessor system also run without modification in a multiprocessor
system. However, applications that access writable global sections or
that rely on process priority for synchronizing tasks should be
reexamined and modified according to the information contained in this
section.
In addition, some applications may execute more efficiently on a
multiprocessor if they are specifically adapted to a multiprocessing
environment. Application programmers may want to decompose an
application into several processes and coordinate their activities by
means of event flags or a shared region in memory.
6.7.5 Synchronization Using Locks
A spin lock is a device used by a processor to
synchronize access to data that is shared by members of a symmetric
multiprocessing (SMP) system. A spin lock enables a set of processors
to serialize their access to shared data. The basic form of a spin lock
is a bit that indicates the state of a particular set of shared data.
When the bit is set, it shows that a processor is accessing the data. A
bit is either tested and set or tested and cleared; it is atomic with
respect to other threads of execution on the same or other processors.
A processor that needs access to some shared data tests and sets the
spin lock associated with that data. To test and set the spin lock, the
processor uses an interlocked bit-test-and-set instruction. If the bit
is clear, the processor can have access to the data. This is called
locking or acquiring the spin lock. If the bit is set, the processor
must wait because another processor is already accessing the data.
Essentially, a waiting processor spins in a tight loop; it executes
repeated bit test instructions to test the state of the spin lock. The
term spin lock derives from this spinning. When the spin lock is in a
loop, repeatedly testing the state of the spin lock, the spin lock is
said to be in a state of busy wait. The busy wait ends when the
processor accessing the data clears the bit with an interlocked
operation to indicate that it is done. When the bit is cleared, the
spin lock is said to be unlocked or released.
Spin locks are used by the operating system executive, along with the
interrupt priority level (IPL), to control access to system data
structures in a multiprocessor system.
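The test-and-set loop described above can be sketched in Python, where Lock.acquire(blocking=False) serves as the atomic test-and-set. This is a rough illustration only; real spin locks are interlocked processor instructions, not Python objects:

```python
import threading

class SpinLock:
    """Busy-waits until an atomic test-and-set succeeds."""
    def __init__(self):
        self._bit = threading.Lock()    # acquire(False) acts as test-and-set

    def acquire(self):
        while not self._bit.acquire(blocking=False):
            pass                        # busy wait: spin until the bit is clear

    def release(self):
        self._bit.release()             # clear the bit

counter = 0
lock = SpinLock()

def bump(n):
    global counter
    for _ in range(n):
        lock.acquire()
        counter += 1                    # protected read-modify-write
        lock.release()

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```

Each thread spins until the test-and-set succeeds, serializing access to the shared counter just as a processor serializes access to shared data.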
6.7.6 Writable Global Sections
A writable global section is an area of memory that can be accessed
(read and modified) by more than one process. On uniprocessor or SMP
systems, access to a single global section with an appropriate read or
write instruction is atomic on OpenVMS systems. Therefore, no other
synchronization is required.
An appropriate read or write on VAX systems is an instruction that is a
naturally aligned byte, word, or longword, such as a MOVx
instruction, where x is a B for a byte, W for a word, or L for
a longword. On Alpha systems, an appropriate read or write instruction
is a naturally aligned longword or quadword, for instance, an
LDx or write STx instruction where x is an L
for an aligned longword or Q for an aligned quadword.
On multiprocessor systems, during a read-modify-write sequence, two or
more processes can execute concurrently, one on each processor. As a
result, it is possible that concurrently
executing processes can access the same locations simultaneously in a
writable global section. If this happens, only partial updates may
occur, or data could be corrupted or lost, because the operation is not
atomic. Unless proper interlocked instructions are used on VAX systems
or load-locked/store-conditional instructions are used on Alpha
systems, invalid data may result. You must use interlocked or
load-locked/store-conditional instructions, their high-level language
equivalents, or other synchronizing techniques, such as locks or event
flags.
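The lost-update hazard of a non-atomic read-modify-write can be made concrete by replaying one possible interleaving by hand (pure simulation; no real concurrency involved):

```python
# Shared "global section" cell.
memory = {"count": 0}

# A read-modify-write broken into its non-atomic steps.
def load(cpu):  return memory["count"]   # step 1: read
def store(val): memory["count"] = val    # step 3: write back

# One unlucky interleaving of two processors incrementing concurrently:
r1 = load("cpu1")        # cpu1 reads 0
r2 = load("cpu2")        # cpu2 reads 0 before cpu1 writes back
store(r1 + 1)            # cpu1 writes 1
store(r2 + 1)            # cpu2 also writes 1 -- cpu1's update is lost

print(memory["count"])   # 1, not the expected 2
```

Interlocked or load-locked/store-conditional instructions prevent exactly this interleaving by making the read-modify-write appear atomic.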
On a uniprocessor or SMP system, access to multiple locations within a
global section with read or write instructions or a read-modify-write
sequence is not atomic. On a uniprocessor system, an interrupt can
occur that causes process preemption, allowing another process to run
and access the data before the first process completes its work. On a
multiprocessor system, two processes can access the global section
simultaneously on different processors. You must use a synchronization
technique such as a spin lock or event flags to avoid these problems.
Check existing programs that use writable global sections to ensure
that proper synchronization techniques are in place. Review the program
code itself; do not rely on testing alone, because an instance of
simultaneous access by more than one process to a location in a
writable global section is rare.
If an application must use queue instructions to control access to
writable global sections, ensure that it uses interlocked queue
instructions.
6.8 Using Event Flags
Event flags are maintained by the operating system for general
programming use in coordinating thread execution with asynchronous
events. Programs can use event flags to perform a variety of signaling
functions. Event flag services clear, set, and read event flags. They
also place a thread in a wait state pending the setting of an event
flag or flags.
Table 6-2 shows the two usage styles of event flags.
The wait form of system services is a variant of asynchronous services;
there is a service request and then a wait for the completion of the
request. For reliable operation in most applications, WAIT form
services must specify a status block. The status block prevents the
service from completing prematurely and also provides status
information.
6.8.1 General Guidelines for Using Event Flags
Explicit use of event flags follows these general steps:
Implicit use of event flags may involve only step 4, or steps 1, 4, and
5.
Use run-time library routines and system services to accomplish these
event flag tasks. Table 6-3 summarizes the event flag routines and
services.
Some system services set an event flag to indicate the completion or
the occurrence of an event; the calling program can test the flag.
Other system services use event flags to signal events to the calling
process, such as SYS$ENQ(W), SYS$QIO(W), or SYS$SETIMR.
6.8.2 Introducing Local and Common Event Flag Numbers and Event Flag Clusters
Each event flag has a unique number; event flag arguments in system
service calls refer to these numbers. For example, if you specify event
flag 1 in a call to the SYS$QIO system service, then event flag 1 is
set when the I/O operation completes.
To allow manipulation of groups of event flags, the flags are ordered
in clusters of 32 numbers corresponding to bits 0 through 31
(<31:0>) in a longword. The clusters are also numbered from 0 to
4. The range of event flag numbers encompasses the flags in all
clusters: event flag 0 is the first flag in cluster 0, event flag 32 is
the first flag in cluster 1, and so on.
Event flags are divided into five clusters: two for local event flags
and two for common event flags. There is also a special local cluster 4
that supports EFN 128.
Table 6-4 summarizes the ranges of event flag numbers and the
clusters to which they belong.
The same system services manipulate flags in both local and common
event flag clusters. Because the event flag number implies the cluster
number, you need not specify the cluster number when you call a system
service that refers to an event flag.
When a system service requires an event flag cluster number as an
argument, you need only specify the number of any event flag in the
cluster. Thus, to read the event flags in cluster 1, you could specify
any number in the range 32 through 63.
Event flag 0 is the default event flag. Whenever a process requests a
system service with an event flag number argument, but does not specify
a particular flag, event flag 0 is used. Therefore, event flag 0 is
more likely than other event flags to be used incorrectly for multiple
concurrent requests.
Code that uses any event flag should be able to tolerate spurious sets,
assuming that the only real danger is a spurious clear that causes a
thread to miss an event. Since any system service that uses an event
flag clears the flag, there is a danger that an event that has occurred
but has not been responded to is masked, which can result in a hang. For
further information, see the SYS$SYNCH system service in HP OpenVMS System Services Reference Manual: GETUTC--Z.
6.8.4 Using EFN$C_ENF Local Event Flag
The combination of EFN$C_ENF and a status block should be used with the
wait form of system services, or with the SYS$SYNCH system service.
EFN$C_ENF does not need to be initialized, nor does it need to be
reserved or freed. Multiple threads of execution may concurrently use
EFN$C_ENF without interference as long as they use a unique status
block for each concurrent asynchronous service. When EFN$C_ENF is used
with explicit event flag system services, it performs as if always set.
You should use EFN$C_ENF to eliminate the chance for event flag overlap.
6.8.5 Using Local Event Flags
Local event flags are automatically available to each program. They are
not automatically initialized. However, if an event flag is passed to a
system service such as SYS$GETJPI, the service initializes the flag
before using it.
When using local event flags, use the event flag routines as follows:
The following Fortran example uses LIB$GET_EF to choose a local event
flag and then uses SYS$CLREF to set the event flag to 0 (clear the
event flag). (Note that run-time library routines require an event flag
number to be passed by reference, and system services require an event
flag number to be passed by value.)
INTEGER FLAG,
2 STATUS,
2 LIB$GET_EF,
2 SYS$CLREF
STATUS = LIB$GET_EF (FLAG)
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
STATUS = SYS$CLREF (%VAL(FLAG))
IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
Local event flags are used most commonly with other system services.
For example, you can use the Set Timer (SYS$SETIMR) system service to
request that an event flag be set either at a specific time of day or
after a specific interval of time has passed. If you want to place a
process in a wait state for a specified period of time, specify an
event flag number for the SYS$SETIMR service and then use the Wait for
Single Event Flag (SYS$WAITFR) system service, as shown in the C
example that follows:
.
.
.
main() {
unsigned int status, daytim[2], efn=3; /* daytim holds a quadword (64-bit) time */
/* Set the timer */
status = SYS$SETIMR( efn, /* efn - event flag */
&daytim, /* daytim - expiration time */
0, /* astadr - AST routine */
0, /* reqidt - timer request id */
0); /* flags */
if ((status & 1) != 1)
LIB$SIGNAL( status );
.
.
.
/* Wait until timer expires */
status = SYS$WAITFR( efn );
if ((status & 1) != 1)
LIB$SIGNAL( status );
.
.
.
}
In this example, the daytim argument refers to a
64-bit time value. For details about how to obtain a time value in the
proper format for input to this service, see Chapter 27.
6.8.6 Using Common Event Flags
Common event flags are manipulated like local event flags. However,
before a process can use event flags in a common event flag cluster,
the cluster must be created. The Associate Common Event Flag Cluster
(SYS$ASCEFC) system service creates a named common event flag cluster.
By calling SYS$ASCEFC, other processes in the same UIC group can
establish their association with the cluster so they can access flags
in it. Each process that associates with the cluster must use the same
name to refer to it; the SYS$ASCEFC system service establishes
correspondence between the cluster name and the cluster number that a
process assigns to the cluster.
The first program to name a common event flag cluster creates it; all
flags in a newly created cluster are clear. Other processes on the same
OpenVMS cluster node that have the same UIC group number as the creator
of the cluster can reference the cluster by invoking SYS$ASCEFC and
specifying the cluster name.
Different processes may associate the same name with different common
event flag numbers; as long as the name and UIC group are the same, the
processes reference the same cluster.
Common event flags act as a communication mechanism between images
executing in different processes in the same group on the same OpenVMS
cluster node. Common event flags are often used as a synchronization
tool for other, more complicated communication techniques, such as
logical names and global sections. For more information about using
event flags to synchronize communication between processes, see
Chapter 3.
If every cooperating process that is going to use a common event flag
cluster has the necessary privilege or quota to create a cluster, the
first process to call the SYS$ASCEFC system service creates the cluster.
The following example shows how a process might create a common event
flag cluster named COMMON_CLUSTER and assign it a cluster number of 2:
.
.
.
#include <descrip.h>
.
.
.
unsigned int status, efn=65;
$DESCRIPTOR(name,"COMMON_CLUSTER"); /* Cluster name */
.
.
.
/* Create cluster 2 */
status = SYS$ASCEFC( efn, &name, 0, 0);
Other processes in the same group can now associate with this cluster.
Those processes must use the same character string name to refer to the
cluster; however, the cluster numbers they assign do not have to be the
same.
The Kansas City Standard
I was pondering my laser transmitter the other day, and began to think of how I might transmit digital information from the Arduino to the remote receiver. Since I am old, I remember the days when programs used to be stored on a now-obsolete audio storage medium called cassette tape. Indeed, the first storage device I ever used was the Atari 410 tape drive pictured on the right.
The Atari stored data at 600 baud, using FSK (data is stored as short bursts of two different tones; in the case of the Atari, 3995 and 5327 Hz), using a variant of the so-called Kansas City Standard. KCS was a standard for storing data on cassettes that was developed at a Byte magazine sponsored symposium in 1975.
Data is converted to short bursts of 1200 Hz and 2400 Hz tones to represent zeroes and ones, respectively. Each burst is 1/300 of a second long, meaning that it sends 300 bits per second. Each 8-bit character is framed by a 0 start bit and a pair of 1 stop bits so that character frames can be identified. It was designed to be a reliable if slow format, and succeeded on both counts. It transmits about 27 characters per second. An 8K download would take about five minutes.
It’s amazing we ever lived through the stone age.
Anyway, I thought it would be fun to make an Arduino library to send this information over my laser link, but first I decided that it would be good to test to make sure I understood how the format worked. So, I coded up a small program to generate some test .WAV files from an input data file. I made the problem simpler by generating the output at the somewhat non-standard sample rate of 9600 samples per second. This considerably simplifies the generation of samples, since they only would have amplitudes of zero, plus or minus one, and plus or minus sqrt(2)/2. I coded up the following C code, and generated this WAV file.
A WAV file that contains an ASCII test message, encoded in Kansas City Standard
The encoder is simple, the decoder, somewhat less so. So, to test that I was generating the proper bits, I used Martin Ward’s decoder written in Perl which did a good job of decoding the sample WAV files. I haven’t tested the robustness of this format with respect to noise yet, but it does appear to work reasonably well.
It wouldn’t be that hard to modify the simple sound generation code I used before to send data in this format. I think I will try to get to that next week.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sndfile.h>

/*
 * kc.c
 * A program which takes a file as input, and encodes it via the old
 * Kansas City Standard - a 300 baud format that was used by old
 * microcomputers to store data onto cassette tape.
 *
 * We are going to produce a 9600 sample per second output file...
 *
 * Each "baud" is 32 samples long.
 *
 * A '0' is 4 cycles @ 1200 Hz.
 * A '1' is 8 cycles @ 2400 Hz.
 *
 * 0 - 0 R2 1 R2 0 -R2 -1 -R2
 * 1 - 0 1 0 -1
 */

#define R2 (.70710678118654752440f)

SNDFILE *sf;
SF_INFO sfinfo;

void output(float f)
{
    sf_write_float(sf, &f, 1);
}

void send(int bit)
{
    int c;
    switch (bit) {
    case 0:
        for (c = 0; c < 4; c++) {
            output(0.f); output(R2); output(1.f); output(R2);
            output(0.f); output(-R2); output(-1.f); output(-R2);
        }
        break;
    case 1:
        for (c = 0; c < 8; c++) {
            output(0.f); output(1.f); output(0.f); output(-1.f);
        }
        break;
    default:
        abort();
    }
}

void encode(int ch)
{
    int i;
    send(0);                    /* start bit... */
    for (i = 0; i < 8; i++) {
        send(ch & 1);
        ch = ch >> 1;
    }
    send(1);                    /* two stop bits */
    send(1);
}

main()
{
    int i, ch;
    sfinfo.channels = 1;
    sfinfo.samplerate = 9600;
    sfinfo.format = SF_FORMAT_WAV | SF_FORMAT_PCM_16;
    sf = sf_open("test.wav", SFM_WRITE, &sfinfo);
    for (i = 0; i < 9600/4; i++) output(0.);
    while ((ch = getchar()) != EOF)
        encode(ch);
    for (i = 0; i < 9600/4; i++) output(0.);
    sf_close(sf);
}
Comment from Mark
Time 7/22/2011 at 7:18 pm
I was surprised too. I have been unable to find details on the AC performance of photovoltaic cells, at least in a way that makes sense. I suspect they are limited by some capacitance which probably increases with cell area. I can imagine using the Arduino to generate some test signals and see what happens.
Comment from Eric Smith
Time 7/22/2011 at 8:08 pm
That 27 characters per second was a huge improvement from the 10 cps paper tape reader on the ASR33. Then I used an Apple II, which had a variable speed cassette interface with an average of about 187 cps. We live in an era of amazing speed!
Comment from Eric Baker
Time 7/22/2011 at 9:45 pm
Mark,
I wrote some code for the Arduino to transmit PSK31 and RTTY50 using a software DDS.
You could probably hook up the laser to the Arduino, have your little Radio Shack amplified speaker reproduce the signal, and feed it into your favorite amateur radio digital mode software through a microphone. (Or figure out how to connect the solar cell to the sound card.)
Anyhow, here is the link:
The audio it produces has a lot of sidebands and thus sounds clicky, probably due to the lack of raised-cosine pulse shaping.
Enjoy!
73
Eric (WY7A, formerly WY7USA)
Comment from Matt
Time 7/23/2011 at 6:43 am
Mark,
I’m often surprised at the timeliness of some of your posts. Just this past week, I was pondering how to archive some cassette based software I have for my Tandy Model 100/200 computers. Naturally, recording them on my desktop seems to be the way to go.
Your discussion of encoder and decoder above is right up my alley at the moment. Thanks for sharing!
Comment from PP5VX (Bone)
Time 5/29/2012 at 9:33 am
Nice post !
Loved to make some great “buzzy sounds”
like do my past ZX-81, and a STANDARD
PACKET (1200b) and PSK31, on my Arduino
2009 Board.
Changing times… heeee ? (hi)
TNX for the best sharing of it !
73/DX from,
PP5VX (Bone)
So. Brazil
Comment from Chris Johnson
Time 7/22/2011 at 4:49 pm
Great stuff. I was interested to see in your previous video that the response time of the solar cell was quick enough to enable voice transmission (for some reason I had assumed that solar cells, like LDRs, would be very slow). Do you have any idea what bandwidth is actually possible?
Importing Assets Directly into Files
There are two major ways to import assets, such as images, fonts, and files, into a Gatsby site. The default path is to import the file directly into a Gatsby template, page, or component. The alternative path, which makes sense for some edge cases, is to use the static folder.
Importing assets with webpack
With webpack you can
import a file right in a JavaScript module. This tells webpack to
include that file in the bundle. Unlike CSS imports, importing a file
gives you a string value. This works for the following file extensions:
svg, jpg, jpeg, png, gif, mp4, webm, wav, mp3, m4a, aac, and oga.
Here’s an example:
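A minimal sketch of such an import (the component and the logo.png file name are illustrative; webpack resolves the import at build time, so this fragment only runs inside a bundled Gatsby project):

import React from "react"
import logo from "./logo.png" // logo is a string: the final path webpack produces

const Header = () => <img src={logo} alt="Logo" />

export default Header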
This ensures that when the project is built, webpack will correctly move the images into the public folder, and provide us with correct paths.
You can also reference files in CSS to import them.
If you’re using SCSS the imports are relative to the entry SCSS file.
Please be advised that this is also a custom feature of webpack.
Additional resources
- More on using an imported font.
Querying for a File in GraphQL using gatsby-source-filesystem
You can also import files using GraphQL by querying for them in your data layer, which will trigger copying of those files to the public directory. Querying for the
publicURL field of
File nodes will provide URLs you can use in your JavaScript components, pages and templates.
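A sketch of such a query (the extension filter is illustrative; the exact fields available depend on your gatsby-source-filesystem configuration):

query {
  allFile(filter: { extension: { eq: "pdf" } }) {
    edges {
      node {
        name
        publicURL
      }
    }
  }
}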
Examples
- Copy all.
|
How to start a session bean as soon as it is deployed
Masoodah Ahmed, Feb 1, 2008 2:10 PM
Hi,
I want a session bean (or something else) to start as soon as I deploy the application. Basically my need is to access a certain table in the database periodically using the @Timeout timer service.
My bean looks like so
@Stateless
public class AdSetScheduleBean implements AdSetScheduleRemote, AdSetScheduleLocal {
// Add business logic below. (Right-click in editor and choose
// "EJB Methods > Add Business Method" or "Web Service > Add Operation")
public AdSetScheduleBean() {
}
@PostConstruct
public void init() {
System.out.println("I am in Postconstructor of AdSetScheduleBean");
}
@Resource TimerService timerService;
//Method to be called on timer expiration
@Timeout
public void CheckAdSetSchedule( Timer timer ){
// Define method here
}
public void startTimer(){
System.out.println("I am in startTimer");
long timeNow = System.currentTimeMillis();
long interval = 5;
timerService.createTimer(timeNow, interval, null);
}
}
How do I wake this bean on startup and have it check status at regular
Thanks
1. Re: How to start a session bean as soon as it is deployed
Ragav Gomatam, Feb 1, 2008 9:35 PM (in response to Masoodah Ahmed)
Use ejb Timer Service
2. Re: How to start a session bean as soon as it is deployed
Masoodah Ahmed, Feb 2, 2008 8:37 AM (in response to Masoodah Ahmed)
I think I m using ejb timer service.
Pl. see my code.
Can you provide me with an example please.
Thanks
3. Re: How to start a session bean as soon as it is deployed
Oskar Carlstedt, Feb 3, 2008 4:00 PM (in response to Masoodah Ahmed)
Hi All!
Important to know. An ejb is NOT started just because it is deployed. It is just available to a "client" through the application server. There is no method invoked on an ejb during or just after the deploy.
Ok, there are ways to go around this. Add a service to your ejb that is invoking your ejb. Use the @Service-annotation and then you get methods like start(), stop() create() and destroy().
Kind regards!
/Oskar
4. Re: How to start a session bean as soon as it is deployed
Carlo de Wolf, Feb 4, 2008 9:37 AM (in response to Masoodah Ahmed)
I'm planning to change this behavior in light of EJB 3.1. The supported way will be to use @PostConstruct in a @Singleton.
So for now I would say use @PostConstruct in a @Service bean.
Note that the create, start, stop, destroy methods will become deprecated and might disappear completely.
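A minimal sketch of the EJB 3.1 pattern Carlo describes (the class name is illustrative, and this fragment only compiles against a Java EE 6 container that provides javax.ejb; @Startup marks the singleton for eager instantiation at deployment):

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;

// Instantiated eagerly when the application is deployed, so
// @PostConstruct runs without waiting for a first client request.
@Singleton
@Startup
public class AdSetScheduleBootstrap {
    @PostConstruct
    public void init() {
        // start timers, prime caches, etc.
    }
}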
5. Re: How to start a session bean as soon as it is deployedMasoodah Ahmed Feb 4, 2008 10:01 AM (in response to Masoodah Ahmed)
Hi All,
I got this problem resolved for EJB3 by creating a sar which calls the SessionBean and packaging it with my ear, as explained in this link.
When EJB3.1 is officially released I can think of other options.
Thanks for all your help.
6. Re: How to start a session bean as soon as it is deployed
kumar reddy, Apr 1, 2008 8:38 AM (in response to Masoodah Ahmed)
Hi oskar/wolfc
I have tried both options suggested by you but it is not working.
Following is the code of my stateless bean
@Service
@Stateless
@Remote(ConfigService.class)
@Local(ConfigService.class)
public class ConfigServiceBean implements ConfigService {

    @PostConstruct
    public void test() {
        System.out.println("Called test....");
    }

    /**
     * some methods inside this bean
     **/
}
The method test() is not getting called when it is deployed.
Server i am using is Jboss 4.2.1 GA.
The method is getting called when first request is sent to the bean.
Please let me know if i miss something.
Please help me.
Thanks,
Pavan
7. Re: How to start a session bean as soon as it is deployed
Oskar Carlstedt, Apr 1, 2008 9:26 AM (in response to Masoodah Ahmed)
Hi!
When annotating with the service you have four life cycle methods that you can implement.
*create
*start
*stop
*destroy
Try call test from start and you'll se that it will be called.
Cheers
/Oskar
8. Re: How to start a session bean as soon as it is deployed
Oskar Carlstedt, Apr 1, 2008 9:36 AM (in response to Masoodah Ahmed)
Here is a short eaxample:
@Service
@Management(MyServiceManagement.class)
@Depends("jboss.ws:service=DeployerInterceptorEJB3")
public class MyService implements MyServiceManagement {

    private Timer timer = null;

    @Resource
    private EJBContext ejbContext;

    public void start() throws Exception {
        timer = ejbContext.getTimerService().createTimer(1000, "Say Hello!!");
    }

    @Timeout
    public void timeout(Timer timer) {
        System.out.println(timer.getInfo());
        timer = ejbContext.getTimerService().createTimer(1000, "Say Hello again!!");
    }

    public void stop() {
        timer.cancel();
    }
}

public interface MyServiceManagement {
    public void start() throws Exception;
    public void stop() throws Exception;
}
/Oskar
|
Gatsbyjs is a static site generator that uses React as its front-end library to build sites and web applications. Gatsby handles a lot of modern paradigms behind the scenes, so its developers can start building and launch their projects quickly.
Another thing I like about Gatsbyjs is its ever-growing data plugin ecosystem. It lets a developer fetch data directly into a Gatsby generated application using GraphQL. If you decide to use Gatsby you will be enjoying the power of the latest web technologies such as React.js, Webpack, GraphQL, and Node.js.
In this tutorial we are going to set up a simple Gatsbyjs site to see how it is scaffolded and built.
Advantages of using Gatsbyjs
There are many advantages of choosing GatsbyJS to build your next website over other existing stacks and static generators. Here are some of them.
- HTML code is generated server side
- Pre-configured Webpack based build system, thus, no need to spend time on this if you do not want too
- Optimized for speed. Gatsby loads only critical parts, so that your site loads as fast as possible. Once loaded, Gatsby prefetches resources for other pages so that clicking on the site feels incredibly fast.
- Easily extensible by plugin ecosystem
- Automatic routing based on your directory structure. Thus, no need for using a separate routing library
Installing the GATSBYJS CLI
We will be using npm to install the first and most basic tool needed to set up any Gatsbyjs project. This is the CLI tool recommended by the GatsbyJS team for starting a new project. You can use yarn too.
In your terminal, type the following code:
npm install --global gatsby-cli
Yes, the CLI tool must be installed as a global dependency. To start a new project type:
gatsby new first-gatsby-site
After the installation and setup of the new project is complete, we can run the project to see if it is working correctly. Run gatsby develop from the terminal to see the site running live. In your browser window, the default Gatsbyjs application looks like this.
Leave the command running since it enables Hot Reloading. Now any change we make to our project will be reflected directly, without refreshing the page. Currently, this application contains two pages. Hence, the bare minimum routing is already done for us.
Project Structure
Every Gatsby project contains at least these files. This might be familiar, since directories such as node_modules and public are common paradigms in the web and JavaScript community. It also contains package.json, which holds the metadata of any modern JavaScript application.
Our concern is the src directory and the gatsby-config.js file. The directory contains files related to our current application, and the config file contains the site's metadata. The main layout file, layouts/index.js, looks like this:

import React from 'react';
import PropTypes from 'prop-types';
import Helmet from 'react-helmet';
import Link from 'gatsby-link';

const Header = () => (
  <div
    style={{
      background: 'rebeccapurple',
      marginBottom: '1.45rem'
    }}
  >
    <div
      style={{
        margin: '0 auto',
        maxWidth: 960,
        padding: '1.45rem 1.0875rem'
      }}
    >
      <h1 style={{ margin: 0 }}>
        <Link to="/" style={{ color: 'white', textDecoration: 'none' }}>
          Gatsby
        </Link>
      </h1>
    </div>
  </div>
);

const TemplateWrapper = ({ children }) => (
  <div>
    <Helmet
      title="My First Gatsby Site"
      meta={[
        { name: 'author', content: 'amanhimself' },
        { name: 'keywords', content: 'sample, something' }
      ]}
    />
    <Header />
    <div
      style={{
        margin: '0 auto',
        maxWidth: 960,
        padding: '0px 1.0875rem 1.45rem',
        paddingTop: 0
      }}
    >
      {children()}
    </div>
  </div>
);

TemplateWrapper.propTypes = {
  children: PropTypes.func
};

export default TemplateWrapper;
Since it uses Reactjs as the core for writing applications, you need to be familiar with it, especially JSX, to understand the above code. The Header component contains the styles and markup that currently serve as the header of our application. It is rendered on every page by TemplateWrapper, which is our main layout component in the application.
The Link tag you are seeing is the way Gatsbyjs lets users navigate from one page to another. It comes from the gatsby-link library that ships with our Gatsbyjs project as a dependency. The react-helmet library serves the purpose of attaching header information to the HTML document. Just like react-helmet, you can use other React dependencies in a Gatsbyjs project.
The App Pages
The pages/ directory contains the rest of the pages that build up the application. They are plain React components. Take a look at the index.js file inside this directory; it currently serves as the main page of our application. Similarly, you will find the code for the second page in page-2.js, and in our browser window we can navigate to it. To add a third page, create a file called page-3.js inside pages/ with a ThirdPage component:

import React from 'react';
import Link from 'gatsby-link';

const ThirdPage = () => (
  <div>
    <h1>Third Page</h1>
    <p>This is my first Gatsby site</p>
    <Link to="/page-2/">Back to Page 2</Link>
    <br />
    <Link to="/">Go back to the homepage</Link>
  </div>
);

export default ThirdPage;
Now, add a link to the third page on the homepage. Open the index.js file:

import React from 'react';
import Link from 'gatsby-link';

const IndexPage = () => (
  <div>
    <h1>Hi people</h1>
    <p>Welcome to your new Gatsby site.</p>
    <p>Now go build something great.</p>
    <Link to="/page-2/">Go to page 2</Link>
    <br />
    <Link to="/page-3/">New Page!</Link>
  </div>
);

export default IndexPage;
The Navbar
In this section, we will add a navbar, a navigation menu to visit all three pages in our application with ease. Open layouts/index.js and, inside the Header component, replace the contents of the h1 with a list of links:

<h1 style={{ margin: 0 }}>
  <ul style={{ listStyle: 'none' }}>
    <li style={{ display: 'inline-block', marginRight: '1rem' }}>
      <Link
        to="/"
        style={{
          color: 'white',
          textDecoration: 'none',
          fontSize: 'x-large'
        }}
      >
        Home
      </Link>
    </li>
    <li style={{ display: 'inline-block', marginRight: '1rem' }}>
      <Link
        to="/page-2/"
        style={{
          color: 'white',
          textDecoration: 'none',
          fontSize: 'x-large'
        }}
      >
        Page 2
      </Link>
    </li>
    <li style={{ display: 'inline-block', marginRight: '1rem' }}>
      <Link
        to="/page-3/"
        style={{
          color: 'white',
          textDecoration: 'none',
          fontSize: 'x-large'
        }}
      >
        Page 3
      </Link>
    </li>
  </ul>
</h1>
It is nothing but JSX code. On saving the file, the results are reflected immediately on the homepage and on every other page.
Deploying the Site
To publish the site, for example on GitHub Pages, package.json needs a deploy script. We know that gatsby-config.js is the main manifest file for any Gatsbyjs project, so we also need to add the path prefix of the repository, like below.
module.exports = {
  siteMetadata: {
    title: `Gatsby Default Starter`
  },
  pathPrefix: `/first-gatsby-site`
};
From the terminal execute the following command.
npm run deploy
The site will be live on .
|
Feature #16600 (open)
Optimized opcodes for frozen arrays and hashes literals
Description
Context
A somewhat common pattern when trying to optimize a tight loop is to avoid allocations from some regular idioms such as a parameter default value being an empty hash or array, e.g.
def some_hotspot(foo, options = {})
  # ...
end
This is often translated into:

EMPTY_HASH = {}.freeze
private_constant :EMPTY_HASH

def some_hotspot(foo, options = EMPTY_HASH)
  # ...
end
But there are many variations, such as (something || []).each, etc.
A search for that pattern on GitHub returns thousands of hits, and more specifically you'll often see it in popular gems such as Rails.
Proposal
I believe that the parser could apply optimizations when
freeze is called on a Hash or Array literal, similar to what it does for String literals (minus the deduplication).
I implemented it as a proof of concept for
[].freeze specifically, and it's not particularly complex, and I'm confident doing the same for
{}.freeze would be just as simple.
Potential followup
I also went a bit further, and did another proof of concept that reuses that opcode for non-empty literal arrays. Most of the logic needed to decide whether a literal array can be directly used already exists for the
duparray opcode.
So in short,
opt_ary_freeze / opt_hash_freeze could substitute
duparray / duphash if
.freeze is called on the literal, and that would save an allocation. That isn't huge, but could be useful for things such as:
%i(foo bar baz).freeze.include?(value)
Or to avoid allocating hashes and arrays in pattern matching:
case value
in { foo: 42 }.freeze
  # ...
end
Updated by shevegen (Robert A. Heiler) over 2 years ago
I can not evaluate the speed/efficiency part, so from this point of view that may be
ok - I have no idea.
I believe the other part is a general design decision, though. Is the intention really
to encourage people to add .freeze all over everywhere? I am not sure about this. It
may be good to ask matz about his opinion either way.
A lot of code is e. g. in a style like this
EMPTY_Hash = { }.freeze EMPTY_Array = [ ].freeze EMPTY_String = ''.freeze
In particular the last part strikes me as odd. If you use e. g. frozen-string literals
in the comment section set to true, then why is .freeze still used in such .rb files?
Granted, that is 10-year-old code by now, so we cannot use it as a basis for evaluation of
current use of ruby really - but it just strikes me as strange in general. I am also aware
that this is used in a LOT of code bases out there, even aside from github; I saw this in
some ruby gems hosted at rubygems.org.
When I remember the old pickaxe book, it rarely showed such examples with .freeze. Literally
there was no recommendation for this being a "best practice". (Not that you state so either,
but when lots of people use something, you can't help but wonder why they do this.)
Is the intention really to have ruby users use so many .freeze in their ruby code? Is that
an idiom that should be the (or a) default?
I mean, I don't know for certain why it is used, but I suspect the most logical explanation
may be because people can squeeze out more "efficiency" (probably; such as using .freeze
on Strings in the past). Many examples are in the rails active* ecosystem, by the way, on
that github search result, and a lot of the results actually reside within files called
"bundler.rb" - so I would be a bit wary making too many assumptions in general based on
that since it will have some bias. Again, perfectly fine to wish to optimize code, as
matz said nobody minds more speed :) - but I would also evaluate the use case at hand and
the idioms, together, before considering making optimizations based on that, since I think
this is also coupled to a design decision/process (but I may be wrong).
I rarely use that idiom in my ruby code; probably others don't either, so you have to
wonder why that idiom originates, and whether it is a good idiom too.
Updated by Dan0042 (Daniel DeLorme) over 2 years ago
My first thought was: I like this.
My second thought was: this is frozen strings all over again. People were adding
.freeze to all their strings, but the core team (or matz?) considered this was "not the ruby way", and frozen_string_literals was introduced as a countermeasure.
Updated by byroot (Jean Boussier) over 2 years ago
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
byroot (Jean Boussier) wrote:
Or to avoid allocating hashes and arrays in pattern matching:
case value
in { foo: 42 }.freeze
  # ...
end
This is not a literal hash, so unrelated at all.
Updated by byroot (Jean Boussier) over 2 years ago
This is not a literal hash, so unrelated at all.
My bad, I totally misread the disassembly output.
Updated by Eregon (Benoit Daloze) over 2 years ago
- Related to Feature #15393: Add compilation flags to freeze Array and Hash literals added
https://bugs.ruby-lang.org/issues/16600
The QThreadStorage class provides per-thread data storage. More...
#include <QThreadStorage>
Note: All the functions in this class are thread-safe.
The QThreadStorage class provides per-thread data storage.
QThreadStorage is a template class that provides per-thread data storage.
Note that due to compiler limitations, QThreadStorage can only store pointers.
The setLocalData() function stores a single thread-specific value for the calling thread. The data can be accessed later using localData(). QThreadStorage takes ownership of the data (which must be created on the heap with new) and deletes it when the relevant thread exits.

The hasLocalData() function allows the programmer to determine whether data has previously been set using setLocalData(); this is also useful for lazy initialization.

For example, the following code uses QThreadStorage to store a single cache for each thread that calls the cacheObject() and removeFromCache() functions:

QThreadStorage<QCache<QString, SomeClass> *> caches;

void cacheObject(const QString &key, SomeClass *object)
{
    if (!caches.hasLocalData())
        caches.setLocalData(new QCache<QString, SomeClass>);
    caches.localData()->insert(key, object);
}

void removeFromCache(const QString &key)
{
    if (!caches.hasLocalData())
        return;
    caches.localData()->remove(key);
}
See also QThread.
Constructs a new per-thread data storage object.
Destroys the per-thread data storage object.
Note: The per-thread data stored is not deleted. Any data left in QThreadStorage is leaked. Make sure that all threads using QThreadStorage have exited before deleting the QThreadStorage.
See also hasLocalData().
Returns true if the calling thread has non-zero data available; otherwise returns false.
See also localData().
Returns a reference to the data that was set by the calling thread.
Note: QThreadStorage can only store pointers. This function returns a reference to the pointer that was set by the calling thread. The value of this reference is 0 if no data was set by the calling thread.
See also setLocalData() and hasLocalData().
This is an overloaded member function, provided for convenience.
Returns a copy of the data that was set by the calling thread.
Note: QThreadStorage can only store pointers. This function returns a pointer to the data that was set by the calling thread. If no data was set by the calling thread, this function returns 0.
See also hasLocalData().
Sets the local data for the calling thread to data. It can be accessed later using the localData() functions.

See also localData() and hasLocalData().
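QThreadStorage itself needs Qt to compile, so here is an analogous sketch using standard C++ thread_local to illustrate the per-thread-copy semantics described above (an analogy, not Qt API; QThreadStorage additionally stores heap pointers and deletes them on thread exit):

```cpp
#include <string>
#include <thread>

// Each thread sees its own copy of this variable, analogous to one
// QThreadStorage slot (though QThreadStorage stores heap pointers).
thread_local std::string local_data;

bool run_demo() {
    local_data = "main";
    bool worker_saw_fresh_copy = false;
    std::thread t([&] {
        // The worker starts with its own default-constructed string,
        // not the "main" value set by the launching thread.
        worker_saw_fresh_copy = local_data.empty();
        local_data = "worker";
    });
    t.join();
    // The main thread's value is untouched by the worker's write.
    return worker_saw_fresh_copy && local_data == "main";
}
```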
https://doc.qt.io/archives/qtopia4.3/qthreadstorage.html
Authors: George Leontiev, Eugene Burmako, Jason Zaugg, Adriaan Moors, Paul Phillips, Oron Port, Miles Sabin
Supervisor and advisor: Adriaan Moors
History
Introduction
Singleton types, types which have a unique inhabitant, have been a fundamental component of Scala’s semantics dating back to the earliest published work on its type system. They are ubiquitous in ordinary Scala code, typically as the types of Scala object definitions understood as modules, where their role is to give the meaning of paths selecting types and terms from nested values. Selector paths have an intuitive meaning to programmers from a wide range of backgrounds which belies their underpinning by a somewhat “advanced” concept in type theory.
Nevertheless, by pairing a type with its unique inhabitant, singleton types bridge the gap between types and values, and their presence in Scala has over the years allowed Scala programmers to explore techniques which would typically only be available in languages, such as Agda or Idris, with support for full-spectrum dependent types.
Scala’s semantics have up until now been richer than its syntax. The only singleton types which are currently directly expressible are those of the form p.type where p is a path pointing to a value of some subtype of AnyRef. Internally the Scala compiler also represents singleton types for individual values of subtypes of AnyVal, such as Int, or values of type String, which don’t correspond to paths. These types are inferred in some circumstances, notably as the types of final vals. Their primary purpose has been to represent compile time constants (see 6.24 Constant Expressions and the discussion of “constant value definitions” in 4.1 Value Declarations and Definitions).
The types here correspond to literal values (i.e. values which programmers can directly write as terms, see 1.3 Literals) such as 23, true or "foo" of the larger non-singleton types they inhabit (Int, Boolean or String respectively). However, there is no surface syntax to express these types.
As we will see in the motivation section of the proposal below, singleton types corresponding to literal values (henceforth literal types) have many important uses and are already widely used in many important Scala libraries. The lack of first class syntax for literal types has forced library authors to use the experimental Scala macro system to provide a means to express them. Whilst this has proved to be extremely successful, it has poor ergonomics (although this can typically be hidden from library users) and is not portable – because Scala macros in general and the mechanisms used to expose literal types to Scala programmes in particular depend on internal implementation details of the current Scala compiler.
The poor ergonomics of macro-based exposure of literal types was the original motivation for this SIP. The development of Dotty and other Scala dialects since then has made the portability issue more urgent.
Implementation status
Literal types have been implemented in both Typelevel Scala and Dotty.
There has been a great deal of useful experience with the Typelevel Scala implementation in a variety of projects which has resulted in several improvements which are incorporated in the latest iteration of this document. A full implementation of this proposal exists as a pull request relative to the 2.13.x branch of the Lightbend Scala compiler.
Proposal
Proposal summary
- Literals can now appear in type position, designating the corresponding singleton type.
val one: 1 = 1                     // val declaration
def foo(x: 1): Option[1] = Some(x) // param type, type arg
def bar[T <: 1](t: T): T = t       // type parameter bound
foo(1: 1)                          // type ascription
- The .type singleton type forming operator can be applied to values of all subtypes of Any.
def foo[T](t: T): t.type = t
foo(23) // result is 23: 23
- The presence of an upper bound of Singleton on a formal type parameter indicates that singleton types should be inferred for type parameters at call sites.
def wide[T](t: T): T = t
wide(13) // result is 13: Int
def narrow[T <: Singleton](t: T): T = t
narrow(23) // result is 23: 23
- Pattern matching against literal types and isInstanceOf/asInstanceOf tests/conversions are implemented via equality/identity tests of the corresponding values.
(1: Any) match {
  case one: 1 => true
  case _ => false
} // result is true
(1: Any).isInstanceOf[1] // result is true
(1: Any).asInstanceOf[1] // result is 1: 1
(1: Any).asInstanceOf[2] // ClassCastException
- A scala.ValueOf[T] type class and corresponding scala.Predef.valueOf[T] operator has been added yielding the unique value of types with a single inhabitant.
def foo[T](implicit v: ValueOf[T]): T = v.value
foo[13] // result is 13: 13
Motivating examples
Many of the examples below use primitives provided by the Scala generic programming library shapeless. It provides a Witness type class and a family of Scala macro based methods and conversions for working with singleton types and shifting from the value to the type level and vice versa. One of the goals of this SIP is to enable Scala programmers to achieve similar results without having to rely on a third-party library or fragile and non-portable macros.
The relevant parts of shapeless are excerpted in Appendix 1. Given the definitions there, some of the forms summarized above can be expressed in current Scala,
val wOne = Witness(1)
val one: wOne.T = wOne.value // wOne.T is the type 1
                             // wOne.value is 1: 1

def foo[T](implicit w: Witness[T]): w.T = w.value
foo[wOne.T] // result is 1: 1

"foo" ->> 23 // shapeless record field constructor
             // result type is FieldType["foo", Int]
The syntax is awkward and hiding it from library users is challenging. Nevertheless, these primitives enable many constructs which have proven valuable in practice.
shapeless records
shapeless models records as HLists (essentially nested pairs) of record values with their types tagged with the singleton types of their keys. The library provides user friendly mechanisms for constructing record values, however it is extremely laborious to express the corresponding types. Consider the following record value,
val book =
  ("author" ->> "Benjamin Pierce") ::
  ("title" ->> "Types and Programming Languages") ::
  ("id" ->> 262162091) ::
  ("price" ->> 44.11) ::
  HNil
Using shapeless and current Scala the following would be required to give book an explicit type annotation,
val wAuthor = Witness("author")
val wTitle = Witness("title")
val wId = Witness("id")
val wPrice = Witness("price")

type Book =
  (wAuthor.T ->> String) ::
  (wTitle.T ->> String) ::
  (wId.T ->> Int) ::
  (wPrice.T ->> Double) ::
  HNil

val book: Book =
  ("author" ->> "Benjamin Pierce") ::
  ("title" ->> "Types and Programming Languages") ::
  ("id" ->> 262162091) ::
  ("price" ->> 44.11) ::
  HNil
Notice that here the val definitions are essential – they are needed to create the stable path required for selection of the member types T.
Under this proposal we can express the record type directly,
type Book =
  ("author" ->> String) ::
  ("title" ->> String) ::
  ("id" ->> Int) ::
  ("price" ->> Double) ::
  HNil

val book: Book =
  ("author" ->> "Benjamin Pierce") ::
  ("title" ->> "Types and Programming Languages") ::
  ("id" ->> 262162091) ::
  ("price" ->> 44.11) ::
  HNil
shapeless enables generic programming and type class derivation by providing a mechanism for mapping
a value of a standard Scala algebraic data type onto a sum of products representation type,
essentially nested labelled
Either’s of the records discussed above. Techniques of this sort are
widely used, and removing the incidental complexity that comes with encoding via macros will improve
the experience for many users across a wide variety of domains.
refined and singleton-ops
refined and singleton-ops are two libraries which build on shapeless’s Witness to support refinement types for Scala. A refinement is a type-level predicate which constrains a set of values relative to some base type, for example, the type of integers greater than 5.
refined allows such types to be expressed in Scala using shapeless’s Witness,
val w5 = Witness(5)
val a: Int Refined Greater[w5.T] = 10

// Since every value greater than 5 is also greater than 4,
// `a` can be ascribed the type Int Refined Greater[w4.T]:
val w4 = Witness(4)
val b: Int Refined Greater[w4.T] = a

// An unsound ascription leads to a compile error:
val w6 = Witness(6)
val c: Int Refined Greater[w6.T] = a
// <console>:23: error: type mismatch (invalid inference):
//  Greater[Int(5)] does not imply
//  Greater[Int(6)]
//        val c: Int Refined Greater[W.`6`.T] = a
//                                              ^
Under this proposal we can express these refinements much more succinctly,
val a: Int Refined Greater[5] = 10
val b: Int Refined Greater[4] = a
Type level predicates of this kind have proved to be useful in practice and are supported by modules of a number of important libraries.
Experience with those libraries has led to a desire to compute directly over singleton types, in effect to lift whole term-level expressions to the type-level which has resulted in the development of the singleton-ops library. singleton-ops is built with Typelevel Scala which allows it to use literal types as discussed in this SIP.
import singleton.ops._

class MyVec[L] {
  def doubleSize = new MyVec[2 * L]
  def nSize[N] = new MyVec[N * L]
  def getLength(implicit length : SafeInt[L]) : Int = length
}
object MyVec {
  implicit def apply[L](implicit check : Require[L > 0]) : MyVec[L] =
    new MyVec[L]()
}

val myVec : MyVec[10] = MyVec[4 + 1].doubleSize
val myBadVec = MyVec[-1] // fails compilation, as required
singleton-ops is used by a number of libraries, most notably our next motivating example, Libra.
Libra
Libra is a dimensional analysis library based on shapeless, spire and singleton-ops. It supports SI units at the type level for all numeric types. Like singleton-ops, Libra is built using Typelevel Scala and so is able to use literal types as discussed in this SIP.
Libra allows numeric computations to be checked for dimensional correctness as follows,
import spire.implicits._
import libra._, libra.si._

(3.m + 2.m).show // res0: String = 5 m [L]
(3.m * 2.m).show // res1: String = 6 m^2 [L^2]
(1.0.km.to[Metre] + 2.0.m + 3.0.mm.to[Metre]).show // res2: String = 1002.003 m [L]
(3.0.s.to[Millisecond] / 3.0.ms).show // res3: String = 1000.0 []

3.m + 2.kg // this should fail
// <console>:22: error: These quantities can't be added!
//        3.m + 2.kg //this should fail
//        ^
Spire
The Scala numeric library Spire provides us with another example where it is useful to be able to use literal types as a constraint.
Spire has an open issue to add a Residue type to model modular arithmetic. An implementation might look something like this,
case class Residue[M <: Int](n: Int) extends AnyVal {
  def +(rhs: Residue[M])(implicit m: ValueOf[M]): Residue[M] =
    Residue((this.n + rhs.n) % valueOf[M])
}
Given this definition we can work with modular numbers without any danger of mixing numbers with different modulii,
val fiveModTen = Residue[10](5)
val nineModTen = Residue[10](9)
fiveModTen + nineModTen // OK == Residue[10](4)

val fourModEleven = Residue[11](4)
fiveModTen + fourModEleven // compiler error:
// type mismatch;
//  found   : Residue[11]
//  required: Residue[10]
Also note that the use of ValueOf as an implicit argument of + means that the modulus does not need to be stored along with the Int in the Residue value, which could be beneficial in applications which work with large datasets.
Proposal details
Literals can now appear in type position, designating the corresponding singleton type. The SimpleType production is extended to include syntactic literals.
SimpleType ::= SimpleType TypeArgs
             | SimpleType ‘#’ id
             | StableId
             | Path ‘.’ ‘type’
             | Literal
             | ‘(’ Types ‘)’
Examples,
val one: 1 = 1                     // val declaration
def foo(x: 1): Option[1] = Some(x) // param type, type arg
def bar[T <: 1](t: T): T = t       // type parameter bound
foo(1: 1)                          // type ascription
The restriction that the singleton type forming operator .type can only be appended to stable paths designating a value which conforms to AnyRef is dropped – the path may now conform to Any. Section 3.2.1 of the SLS is updated as follows,
Singleton Types
SimpleType ::= Path ‘.’ ‘type’
A singleton type is of the form p.type. Where p is a path pointing to a value which conforms to scala.AnyRef, the type denotes the set of values consisting of null and the value denoted by p (i.e., the value v for which v eq p). Where the path does not conform to scala.AnyRef the type denotes the set consisting of only the value denoted by p.
Example,
def foo[T](t: T): t.type = t
foo(23) // result is 23: 23
The presence of an upper bound of Singleton on a formal type parameter indicates that singleton types should be inferred for type parameters at call sites.
This SIP aims to leave the meaning of all currently valid programmes unchanged, which entails that it must not alter type inference in currently valid programmes. Current Scala will generally widen the types of literal values from their singleton type to their natural non-singleton type when they occur in expressions. For example in,
val foo = 23
def id[T](t: T): T = t
id(23)
we expect the inferred type of foo and the actual type parameter inferred for T in the application of id to both be Int rather than 23. This behaviour seems to be natural and appropriate in circumstances where the programmer is not deliberately working with singleton types.
With the introduction of literal types, however, we do want to be able to infer singleton types in cases such as these,
case class Show[T](val s: String)
object Show {
  implicit val showTrue: Show[true] = Show[true]("yes")
  implicit val showFalse: Show[false] = Show[false]("no")
}

def show[T](t: T)(implicit st: Show[T]): String = st.s
show(true) // currently rejected
The proposal in this SIP is that we use an upper bound of Singleton on a formal type parameter to indicate that a singleton type should be inferred. The above example would then be written as,
def show[T <: Singleton](t: T)(implicit st: Show[T]): String = st.s
show(true) // compiles and yields "yes"
This change will not affect the meaning of currently valid programmes, because the widened types inferred for literal values at call sites do not currently conform to Singleton, hence all call sites of the form in the above example would currently be rejected as invalid.
Whilst type inference in Scala is not fully specified, section 6.26.4 Local Type Inference contains language which explicitly excludes the inference of singleton types (see cases 2 and 3, “None of the inferred types Ti is a singleton type”). This does not match the current Scala or Dotty compiler implementations, where singleton types are inferred where a definition is final, specifically non-lazy final val definitions and object definitions. Where a definition has had a singleton type inferred for it, singleton types will be inferred from its uses,
final val narrow = 23 // inferred type of narrow: 23

class Wide
object Narrow extends Wide

def id[T](t: T): T = t
id(Narrow) // result is Narrow: Narrow.type
This SIP updates the specification to match the current implementation and then adds the further refinement that an explicit upper bound of Singleton indicates that a singleton type should be inferred.
Given,
A singleton-apt definition is
- An object definition, or
- A non-lazy final val definition
the relevant clauses of 6.26.4 are revised as follows,
None of the inferred types Ti is a singleton type unless, (1) Ti is a singleton type corresponding to a singleton-apt definition, or (2) the upper bound Ui of Ti conforms to Singleton.
Example,
def wide[T](t: T): T = t
wide(13) // result is 13: Int
def narrow[T <: Singleton](t: T): T = t
narrow(23) // result is 23: 23
A scala.ValueOf[T] type class and corresponding scala.Predef.valueOf[T] operator has been added yielding the unique value of types with a single inhabitant.
Type inference allows us to infer a singleton type from a literal value. It is natural to want to be able to go in the other direction and infer a value from a singleton type. This latter capability was exploited in the motivating Residue example given earlier, and is widely relied on in current Scala in uses of shapeless’s records and LabelledGeneric based type class derivation.
Implicit resolution is Scala’s mechanism for inferring values from types, and in current Scala shapeless provides a macro-based materializer for instances of its Witness type class. This SIP adds a directly compiler-supported type class as a replacement,
final class ValueOf[T](val value: T) extends AnyVal
Instances are automatically provided for all types with a single inhabitant, which includes literal and non-literal singleton types and Unit.
Example,
def foo[T](implicit v: ValueOf[T]): T = v.value
foo[13] // result is 13: 13
A method valueOf is also added to scala.Predef analogously to existing operators such as classOf, typeOf etc.
def valueOf[T](implicit vt: ValueOf[T]): T = vt.value
Example,
object Foo
valueOf[Foo.type] // result is Foo: Foo.type
valueOf[23] // result is 23: 23
Pattern matching against literal types and isInstanceOf/asInstanceOf tests/conversions are implemented via equality/identity tests of the corresponding values.
Pattern matching against typed patterns (see 8.1.2 Typed Patterns) where the TypePat is a literal type is translated as a match against the subsuming non-singleton type followed by an equality test with the value corresponding to the literal type.
Where applied to literal types, isInstanceOf and asInstanceOf are translated to a test against the subsuming non-singleton type and an equality test with the value corresponding to the literal type. In the case of asInstanceOf a ClassCastException is thrown if the test fails.
Examples,
(1: Any) match {
  case one: 1 => true
  case _ => false
} // result is true
(1: Any).isInstanceOf[1] // result is true
(1: Any).asInstanceOf[1] // result is 1: 1
(1: Any).asInstanceOf[2] // ClassCastException
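The translation described above can be spelled out by hand in plain Scala: a test against the literal type 1 behaves like a test against the widened type Int combined with an equality check. This is a sketch of the specified semantics, not the compiler's actual output:

```scala
object LiteralTypeTestDemo {
  // Approximates `(x: Any).isInstanceOf[1]`: widened-type test plus equality.
  def isInstanceOfOne(x: Any): Boolean =
    x.isInstanceOf[Int] && x == 1

  // Approximates `(x: Any).asInstanceOf[1]`: throw on failure, else the value.
  def asInstanceOfOne(x: Any): Int =
    if (isInstanceOfOne(x)) x.asInstanceOf[Int]
    else throw new ClassCastException(s"$x is not 1")
}
```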
Default initialization for vars with literal types is forbidden.
The default initializer for a var is already mandated to be its natural zero element (0, false, null etc.). This is inconsistent with the var being given a non-zero literal type,

var bad: 1 = _
Whilst we could, in principle, provide an implicit non-default initializer for cases such as these, it is the view of the authors of this SIP that there is nothing to be gained from enabling this construction and that the default initializer should be forbidden.
Follow on work from this SIP
Whilst the authors of this SIP believe that it stands on its own merits, we think that there are two areas where follow-on work is desirable, and one area where another SIP might improve the implementation of SIP-23.
Infix and prefix types
SIP-33 Match Infix and Prefix Types to Meet Expression Rules has emerged from the work on refined types and computation over singleton types mentioned in the motivation section above.
Once literal types are available it is natural to want to lift entire expressions to the type level as is done already in libraries such as singleton-ops. However, the precedence and associativity of symbolic infix type constructors don’t match the precedence and associativity of symbolic infix value operators, and prefix type constructors don’t exist at all. It would be valuable to continue the process of aligning the form of the types and terms.
Byte and short literals
Byte and Short have singleton types, but lack any corresponding syntax either at the type or at the term level. These types are important in libraries which deal with low-level numerics and protocol implementation (see e.g. Spire and Scodec) and elsewhere, and the ability to, for instance, index a type class by a byte or short literal would be valuable.
A prototype of this syntax extension existed at an early stage in the development of Typelevel Scala but never matured. The possibility of useful literal types adds impetus.
Opaque types
In the reference implementation of SIP-23, the
ValueOf[A] type is implemented as a value class.
This means that implicit evidence of
ValueOf[A] will erase to
A (the value associated with
the singleton types). This is desirable, but due to value class restrictions, ends up boxing
primitive types (such as
Int).
If we implemented ValueOf[A] as an opaque type instead of a value class, then this boxing would be elided, and the valueOf[A] method would be compiled to an identity function.
Related Scala issues resolved by the literal types implementation
- SI-1273 Singleton type has wrong bounds
- SI-5103 Singleton types not inferred in all places they should be
- SI-8323 Duplicate method name & signature with singleton type parameters over constant types
- SI-8564 Methods with ConstantType results get the inhabitant of ConstantType as their body
Appendix 1 – shapeless excerpts
Extracts from shapeless relevant to the motivating examples for this SIP,
trait Witness {
  type T          // the singleton type represented by this Witness
  val value: T {} // the unique inhabitant of that type
}

object Witness extends Dynamic {
  type Aux[T0] = Witness { type T = T0 }
  type Lt[Lub] = Witness { type T <: Lub }

  /** Materialize the Witness for singleton type T */
  implicit def apply[T]: Witness.Aux[T] = macro ...

  /** Convert a literal value to its Witness */
  implicit def apply[T](t: T): Witness.Lt[T] = macro ...
}

object labelled {
  /** The type of fields with keys of singleton type `K` and value type `V`. */
  type FieldType[K, +V] = V with KeyTag[K, V]
  trait KeyTag[K, +V]

  /** Yields a result encoding the supplied value with the singleton type `K` as its key. */
  def field[K] = new FieldBuilder[K]

  class FieldBuilder[K] {
    def apply[V](v : V): FieldType[K, V] = v.asInstanceOf[FieldType[K, V]]
  }
}

object singleton {
  implicit def mkSingletonOps(t: Any): SingletonOps = macro ...
}

trait SingletonOps {
  import labelled._
  type T

  /** Returns a Witness of the singleton type of this value. */
  val witness: Witness.Aux[T]

  /** Narrows this value to its singleton type. */
  def narrow: T {} = witness.value

  /** Returns the provided value tagged with the singleton type
    * of this value as its key in a record-like structure. */
  def ->>[V](v: V): FieldType[T, V] = field[T](v)
}
https://docs.scala-lang.org/sips/42.type.html
All,
(Also email copy.)
> I'd like to use the 'findproc'() function from within a user process
> to dynamically obtain the "endpoint" of various servers.
>
> Though I "#include <…>", I'm still getting a compilation warning
> that implies the include file (with the prototype) is not, in fact,
> being read. Which include file(s) need to be referenced?
Do you have a findproc()? It was renamed to _pm_findproc() a while
ago so as to not pollute application namespace. Also, you may have to
define _MINIX to get the prototype included.
> Also, what is the difference between 'findproc'()'s return value (an
> int), and the second param (int *)?
The first is the success status, the 2nd is the actual result.
> Finally, which library do I need to reference in my Makefile so that
> the function is properly "bound" into my program?
libc, so no special action required.
=Ben
http://fixunix.com/minix/28459-using-findproc.html
|
Last updated on 2014-09-05
Previous Tutorial: Creating a GEF Editor – Part 4: Showing the Model on the Editor
Hi everyone. In this tutorial we will learn how to load the model from an EMF file (this seemed easy, but it took me some searching to find out how it is done, and I'm not sure I'm doing it the best way it could be done), and along the way we'll also be expanding our model definition to match our requirements. If you didn't generate your EMF editor code, please do so now, because we'll be using it to bootstrap our EMF file before we open it using the GEF editor. So let's get started.
- In the previous tutorial, we used a mock model hard-coded into our diagram, and we also gave the OPMObject figures random locations in our diagram. We'll fix this in two steps, first by expanding the model's definition and second by loading the model data from a file.
- Now we will add constraints information to the OPMThing model class. This requires three steps: first, we need to add the org.eclipse.draw2d plugin as a required dependency of the com.vainolo.phd.opm.model project to allow us to use the classes in this library in our model (an example of how to add new dependencies was shown in the 3rd tutorial). Second, we define a new EDataType, which is the way to connect Ecore models with existing data types outside the model. And third, the constraints attribute will be added to the OPMThing model class.
- Right-click on the package node of the opm.ecore model and select "New Child"->"EData Type". This will add a new entry to the package node as shown below:
As usual, the properties of this node are edited in the properties view of the eclipse framework, so if it is not already open, double-click on the new node to open the properties editor. The data type we are defining is a connection to the Rectangle class defined in the org.eclipse.draw2d.geometry package, so we'll name our new data type "Rectangle" and set the "Instance Type Name" property to org.eclipse.draw2d.geometry.Rectangle.
- Now add a new attribute to the OPMObject class called "constraints" and set its EType to Rectangle (if you forgot how to do this, go to the first tutorial for a short reminder). Your model should now look like this:
- Save the Ecore model and generate all code from the genmodel file. This is good, but we are still missing one piece of the puzzle. Although the EMF framework knows how to reference the Rectangle class, it does not know how to serialize this class to a String, which is one of the things that EMF has to do. Therefore we must provide an implementation ourselves by editing the code generated by the EMF framework. You are probably saying "Hey, you'll edit the EMF code and next time you generate the model code the code you wrote will be erased and you will have to write it again! This is a nightmare!!!". But no. See, EMF code generation is very smart. The code generated by the EMF framework contains comment annotations (that is, annotations that are part of the comments of a class/method/type) that can be read by the framework before new code is generated. In general, all generated code is annotated with the @generated annotation, so if we want to change the generated code, we simply append "NOT" after the comment and the next time code is generated, this function will not be overwritten by the code generator. Nice, ah? So let's get going. Open the class com.vainolo.phd.opm.model.impl.OPMFactoryImpl found in the com.vainolo.phd.opm.model project. There are two functions that must be re-written: createRectangleFromString and convertRectangleToString. I decided to represent a rectangle as a comma-separated list of values: "x,y,width,height", as shown in the code below:
/**
 * <!-- begin-user-doc -->
 * Create a <code>Rectangle</code> instance from a <code>String</code>. The expected
 * representation is "x,y,width,height". Illegal representations will return a null
 * value.
 * <!-- end-user-doc -->
 * @generated NOT
 */
public Rectangle createRectangleFromString(EDataType eDataType, String initialValue) {
  if(initialValue == null) {
    return null;
  }
  // Strings are immutable, so the result of replaceAll must be assigned back.
  initialValue = initialValue.replaceAll("\\s", "");
  String[] values = initialValue.split(",");
  if(values.length != 4) {
    return null;
  }
  Rectangle rect = new Rectangle();
  try {
    rect.setLocation(Integer.parseInt(values[0]), Integer.parseInt(values[1]));
    rect.setSize(Integer.parseInt(values[2]), Integer.parseInt(values[3]));
  } catch(NumberFormatException e) {
    EcorePlugin.INSTANCE.log(e);
    rect = null;
  }
  return rect;
}

/**
 * <!-- begin-user-doc -->
 * Convert a <code>Rectangle</code> to a <code>String</code> representation. The
 * <code>Rectangle</code> is represented as "x,y,width,height".
 * <!-- end-user-doc -->
 * @generated NOT
 */
public String convertRectangleToString(EDataType eDataType, Object instanceValue) {
  if(instanceValue == null) {
    return null;
  }
  Rectangle rect = (Rectangle) instanceValue;
  return rect.x + "," + rect.y + "," + rect.width + "," + rect.height;
}
Please save all of your work. If you want, you can check that the code generation works as expected by re-generating the code and checking that the OPMFactoryImpl class still contains the code marked by @generated NOT.
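The "x,y,width,height" round trip can be exercised in isolation from Eclipse. This standalone sketch mirrors the two factory methods using a plain int array in place of draw2d's Rectangle (the class and method names here are illustrative, not from the tutorial):

```java
public class RectangleStringDemo {

    // Mirrors convertRectangleToString: serialize as "x,y,width,height".
    static String toString(int[] rect) {
        return rect[0] + "," + rect[1] + "," + rect[2] + "," + rect[3];
    }

    // Mirrors createRectangleFromString: returns null for malformed input.
    static int[] fromString(String initialValue) {
        if (initialValue == null) return null;
        // replaceAll returns a new String; the result must be kept.
        String[] values = initialValue.replaceAll("\\s", "").split(",");
        if (values.length != 4) return null;
        try {
            int[] rect = new int[4];
            for (int i = 0; i < 4; i++) {
                rect[i] = Integer.parseInt(values[i]);
            }
            return rect;
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        int[] rect = fromString("10, 20, 30, 40");
        System.out.println(toString(rect)); // 10,20,30,40
    }
}
```

Note that dropping the assignment of replaceAll's result (as in the run-on code above before the fix) would silently leave whitespace in the values and make Integer.parseInt fail.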
- We are done with the model; now we must load the model file into our diagram editor. For this, we'll override the GraphicalEditor.init method, which is called shortly after the EditPart is instantiated, and is provided by the eclipse workbench with an instance of an IEditorInput class from which we can fetch our model. But before we do this, we must add two more plug-in dependencies to our project: org.eclipse.ui.ide and org.eclipse.core.resources. So add the dependencies and override GraphicalEditor.init with the following code:
@Override
public void init(IEditorSite site, IEditorInput input) throws PartInitException {
  super.init(site, input);
  OPMPackage.eINSTANCE.eClass(); // This initializes the OPMPackage singleton implementation.
  ResourceSet resourceSet = new ResourceSetImpl();
  if(input instanceof IFileEditorInput) {
    IFileEditorInput fileInput = (IFileEditorInput) input;
    IFile file = fileInput.getFile();
    opdResource = resourceSet.createResource(URI.createURI(file.getLocationURI().toString()));
    try {
      opdResource.load(null);
      opd = (ObjectProcessDiagram) opdResource.getContents().get(0);
    } catch(IOException e) {
      // TODO do something smarter.
      e.printStackTrace();
      opdResource = null;
    }
  }
}
We have also added two class fields to hold the model and the resource from where the model was loaded:
private Resource opdResource;
private ObjectProcessDiagram opd;
We also changed the source of the contents of the diagram (in the initializeGraphicalViewer method):
@Override
protected void initializeGraphicalViewer() {
    super.initializeGraphicalViewer();
    getGraphicalViewer().setContents(opd);
}
and of course, added lots of imports:
import java.io.IOException;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.gef.DefaultEditDomain;
import org.eclipse.gef.palette.PaletteRoot;
import org.eclipse.gef.ui.parts.GraphicalEditorWithFlyoutPalette;
import org.eclipse.ui.IEditorInput;
import org.eclipse.ui.IEditorSite;
import org.eclipse.ui.IFileEditorInput;
import org.eclipse.ui.PartInitException;
import com.vainolo.phd.opm.gef.editor.part.OPMEditPartFactory;
import com.vainolo.phd.opm.gef.utils.OPMModelUtils;
import com.vainolo.phd.opm.model.OPMPackage;
import com.vainolo.phd.opm.model.ObjectProcessDiagram;
After all this, your code should now compile with no errors.
- The last thing we need to do is read the constraints information from our model into our figures. Open the OPMObjectEditPart and replace the refreshVisuals method with the following code:
@Override
protected void refreshVisuals() {
    OPMObjectFigure figure = (OPMObjectFigure) getFigure();
    OPMObject model = (OPMObject) getModel();
    ObjectProcessDiagramEditPart parent = (ObjectProcessDiagramEditPart) getParent();
    figure.getLabel().setText(model.getName());
    Rectangle layout = new Rectangle(model.getConstraints().x, model.getConstraints().y,
        model.getConstraints().width, model.getConstraints().height);
    parent.setLayoutConstraint(this, figure, layout);
}
Good. But before we can see our OPMObjects in our diagram, we must define their constraints. We'll be doing this in the "OPM Model Editor" that is automatically generated by the EMF framework. So execute the project, right-click on the model source file (in my case "TheBestOPMModel.opm"), and select "Open With"->"OPM Model Editor" (ignore problems that may occur while initializing the "OPM GEF Editor", and close it if it is open; they are probably caused by a malformed input file that does not yet contain the constraints field):
This model editor looks very similar to the Ecore editor, and this is because they are both based on the same framework, so you should already know how to work with it. Navigate to the “Object Process Diagram” node on the editor, right-click and select “New Child”->”Object”:
Now fill up the Object’s properties (in the properties view). I set them as follows:
- Name: O1
- Constraints: 50,50,50,50
You can add more objects using the EMF editor and see the result (note that the diagram does not refresh automatically, so you must close it and open it again to see the changes in the model; refreshing the editor is something that will be handled in a future tutorial).
The final eclipse project files can be downloaded here.
That’s all for today. If you have any problems, don’t hesitate to leave a comment. And thank you for your visit.
Next Tutorial: Creating a GEF Editor – Part 6: Model Refactoring and Editing Diagram Entities
19 Comments
Dear author,
your articles are really helpful. I am now facing a problem extending an existing editor. I want the extension to be created in an Eclipse fragment. What is the minimum change that should be done in the editor project? Is it possible to just modify the model's ecore file and create the view and edit part of the element in another fragment?
yours sincerely,
Simon
Hi Simon. I am not sure I understand what you mean by “eclipse fragment”. Could you please give me a longer explanation?
Thanks
Vainolo
the fragment is another type of plugin. Its configuration file is fragment.xml, not plugin.xml. It is a logical extension of the plugin it extends. If you look at the TIGER (model transformation) project source code, you can find some examples. But after a few days of studying, it seems not applicable. But it's not the question for me now.
Dude, nice thing you got here, but if you don’t explain what the code does, is not a tutorial just a DIY example
Hi Lucian. From my experience the best way to learn is by reading code that works and does something similar to what you want to do, and then trying to learn why the code works. Since I am also learning GEF at this time, this is what I am doing. I plan on writing some more explanations on the internals of GEF, but they are fairly complicated, and not until I have a full understanding of them will I be able to do this.
But thanks anyway.
Finally found a nice explanation on how GEF works.
You can find it linked in my “useful links” page. Pretty useful but missing LOADS of examples. I’m trying to fill out that gap.
hi,
i also think the mode persistence needs time studying. here is a piece of code for creating a model file.
System.out.println("create henshin file with name:" + name);
IFile transformationFile = null;
if (!project.getFile(name).equals(null))
I create a model transformation file with the postfix ".henshin", but the xmi file is encoded as ASCII. How can I change it to utf-8?
Should I change something here, or should I change something while creating the resources?
Do you have any suggestions?
yours sincerely
Simon
That's not a problem either now. I have to set an encoding in the save options.
like:
Map options = new HashMap();
options.put(XMLResource.OPTION_ENCODING, "utf-8");
transformationResource.save(options);
Hi,
Thank you for your tutorial, it is very interesting.
I tried this part 5 the first time with the OPM model; the second time I tried it with a real model. I arrived at Step 7:
“Open With”->”OPM Model Editor”
But I cannot get the OPM Model Editor; I have just the GEF Editor, and I don't know what is missing. Can you tell me what the "Model Editor" depends on, and what to do to have this Model Editor in the editor choice list?
Thanks for your help !!
What do you mean “with a real model”? The editor only works with OPM models. This is defined in the plug-in’s description, when it tells that only files ending with “.opm” are opened by the editor.
In fact I needed to try it, for my company's needs, on a real model prototype of the company. I had some problems but it is OK now, thanks for your answer.
Hey ADMIN,
I just want to say that I really appreciate these tutorials. Specially that you have attached the working code for each tutorial.
Thank you for sharing your knowledge.
This tutorial series in general does an excellent job of walking through the mechanics of GEF, but it does not follow some of the best practices for MVC, in particular separating model data from view data. In this specific installment of the tutorial, view-specific information (e.g. the rectangle data) is being put into the model. This makes it impossible, for example, to have multiple views of the same model element, as each would have different rectangle information.
Hi, and thanks for the comment.
What you say regarding the view/model is more complicated. In my case, part of my model is the location of the figure. Furthermore, the really correct implementation would need two models, one for the view and one for the data. But this is too much work, and frankly not worth the effort.
Hi,
I'm suffering from a problem that I can't tell is an IDE bug or my mistake. My Eclipse Luna (4.40RC3) can't find IFileEditorInput. However, if I press the shortcut CTRL + SHIFT + T, eclipse can open it. WTF? I've correctly added the new dependency to the manifest, plus other possible dependencies (org.eclipse.ui, org.eclipse.ui.ide, etc.). And I've also tried to add the .jar directly as a project dependency (in the build path) and the "cannot resolve" mark always appears.
I’ve googled also but I can’t find a solution that would be OK for me. Could you give me some idea about the problem? How could I resolve it?
Thanks
Hi Jose. This sometimes happened to me also, where dependencies that I know exist suddenly aren't loaded. Sometimes restarting eclipse works. If you are using Maven, sometimes the problem is in the definitions there. Nothing of much help that I can say here.
Hi ,
I have downloaded your project of part 5, and in the new workspace, while opening the opm file with the OPM model editor, I am getting the following exception:
“org.eclipse.emf.ecore.resource.impl.ResourceSetImpl$1DiagnosticWrappedException: org.xml.sax.SAXParseExceptionpublicId: platform:/resource/ssssss/eeeeee.opm; systemId: platform:/resource/ssssss/eeeeee.opm; lineNumber: 1; columnNumber: 1; Premature end of file.”
While debugging I have found that in OPMGraphicalEditor this issue occurs at the following lines:
opdResource.load(null);
opd =(ObjectProcessDiagram)opdResource.getContents().get(0);
Thanks&Regards
Rahul Kumar Upadhyay
There is a problem parsing the model file you are using. This can be caused by changes I do all the time to the model. That’s the best I can help.
https://vainolo.com/2011/06/22/creating-a-gef-editor-%E2%80%93-part-5-loading-the-model-from-an-emf-file/
Best of Modern JavaScript — Methods, IIFEs, and this
Since 2015, JavaScript has improved immensely.
It’s much more pleasant to use it now than ever.
In this article, we'll look at methods, IIFEs, and the value of this in JavaScript.
Method Definition Syntax for Methods
We should use the method definition syntax for methods.
For example, we can write:
const obj = {
foo() {},
bar() {}
}
to define methods in
obj .
This is the same as:
const obj = {
foo: function() {},
bar: function() {}
}
If we don’t need the value of
this , we can also write:
const obj = {
foo: () => {},
bar: () => {}
}
We used arrow functions so that we don’t have to worry about the value of
this in the function.
Avoid IIFEs in ES6
We don’t really need IIFEs in ES6 or later.
The most common use of IIFEs is to define private variables that are only available within a function.
In ES5, we have something like:
(function() {
var tmp = 'foo';
//...
}());
In ES6, we can just define
tmp within a block:
{
let tmp = 'foo';
//...
}
We also used to use IIFEs as modules.
For instance, we may write something like:
var module = (function() {
  var foo = 0;

  function bar(x) {
    foo++;
    //...
  }

  return {
    bar: bar
  };
}());
We return an object with the public properties so that we can use them elsewhere.
With ES6, we don’t need this anymore since we have native modules.
For example, we can just write:
module.js
let foo = 0;

export function bar(x) {
  foo++;
  //...
}
We just create a module file and use
export to export what we want from it.
Then we can use it by importing the function.
For example, we can write:
import { bar } from './module';

bar(100);
We can still write IIFEs as immediately invoked arrow functions.
For example, we can write:
const arr = [3, 2, 1];

const sorted = (() => {
  arr.sort();
  return arr.join('');
})();
to sort our array.
The Rules for
this
this is defined differently in various situations.
For traditional standalone functions in strict mode,
this is
undefined .
For traditional standalone functions in sloppy mode,
this is the
window object.
Generator functions, generator methods, and methods work like traditional functions.
Arrow functions always take the value of
this from the function outside of it.
Classes are implicitly strict, so we can't call a class like an ordinary function.
We'll get a TypeError if we try to call it directly.
Traditional Functions
Traditional functions are functions that we have from ES5 or earlier.
We can create it as a function expression:
const foo = function(x) {
//...
};
or we can create a function declaration;
function foo(x) {
//...
}
this is undefined in strict mode and the global object in sloppy mode.
In method calls
this is the receiver of the method call.
It’s the first argument of
call or
apply .
In constructor calls,
this is the newly created instance.
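A short strict-mode sketch of the rules above (the names are mine, not from the article):

```javascript
'use strict';

function standalone() {
  // Traditional standalone function in strict mode: `this` is undefined.
  return this;
}

const counter = {
  count: 41,
  regular() {
    // Method call: `this` is the receiver (counter).
    return this.count + 1;
  },
  makeArrow() {
    // The arrow takes `this` from makeArrow, i.e. still the receiver.
    return () => this.count + 1;
  }
};

console.log(standalone());          // undefined
console.log(counter.regular());     // 42
console.log(counter.makeArrow()()); // 42
```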
Conclusion
There’re various kinds of functions in ES6.
Also, they all have different values of
this depending on the type of function and location.
https://hohanga.medium.com/best-of-modern-javascript-methods-iifes-and-this-8791c1cfcfbd
In this article, you will learn about a C program to check whether a number is prime.
What is prime number ?
A prime number is a natural number that is divisible by 1 and itself only.
For example 2, 3, 5, 7….
Please go through the following articles of C programming to understand the logic of the program.
C program to check prime number
#include <stdio.h>

int main()
{
    int num, i, j = 0;

    printf("Enter number: ");
    scanf("%d", &num);

    // check for prime number
    for (i = 1; i <= num; i++) {
        if ((num % i) == 0) {
            j++;
        }
    }

    if (j == 2)
        printf("%d is a prime number.", num);
    else
        printf("%d is not a prime number.", num);

    return 0;
}
Output
Explanation
The logic of the above program is simple. We know that a prime number is divisible only by 1 and itself.
Hence, every time the program flow enters the for loop, it checks whether the number is divisible by the iteration number.
If the given number is perfectly divided by the iteration number, then j is increased by 1. This continues until the iteration reaches the number itself.
Finally, the value of j is checked, and if it is equal to 2 (the only divisors found were 1 and the number itself), the number is prime.
http://www.trytoprogram.com/c-examples/c-program-to-check-prime-number/
Wikibooks:Subject pages
The subject pages are an organization and navigational tool here at Wikibooks. We use these pages to group our books together by subject, group sub-topics together into larger topics, and provide an easy hierarchical way to find books from our large collection. Subject pages are the replacement for the older Department/Bookshelf system that was not nearly as expandable or versatile.
Creating subject pages
There is no single way that Subject pages must look. In fact, it's probably better if subject pages for different topics are unique and specifically tailored for the reading audience. However, some pages in the Subject: namespace were created using templates, and that's okay too. Ideally, all those template-created subject pages will be improved to look better over time.
In general, there are a few things that a subject page might want to have:
- A list of books in that subject. You can use {{CategoryList|CATEGORY NAME}} to produce this automatically
- A list of topic books in other categories, such as Category:Books with print version, Category:Books with PDF version, and Category:Featured books. You can find books that are simultaneously in two categories at once using the {{CategoryJunction|CAT 1|CAT 2}} template.
- A list of books in similar topics, but not in this topic. Use {{CategoryIntersection|CAT 1|CAT 2}} to create these lists
- Links to information on the topic on other Wikimedia projects, such as Wikipedia, Wikiversity, and Wiktionary. Use {{Sisterlinks}} or {{Associated Wikimedia}} for this.
- A link to a parent category. It's important that all subject pages have a parent. This makes them easier to find. To put a subject page into the category of a parent, use the {{subjects}} template.
Feel free to get creative with these lists. The more relevant information that you can present to readers, the better.
Subject casing
Books and their categories are title cased. Partly to reduce conflicts where possible, Subject pages and their corresponding categories are sentence cased.
History
Subject pages were implemented in August 2007 and the decision to phase out bookshelves in November 2007.
http://en.wikibooks.org/wiki/Wikibooks:Subject_pages
A peer-to-peer network infrastructure library.
Project description
Peertable
Welcome to Peertable! This project is an infrastructural peer-to-peer networking library for Python 3. You can also use it in standalone mode, although that is for little use other than connecting existing peers and networks (so they can find each other - but once they do, you don't need that anymore!).
Creating a Bridge Peer
The default (mostly demonstrative) Peertable application will only log the messages given to it, but, like any peer, it serves an actual purpose: acting as a bridge between peers, or possibly even networks!
To use the default application, use:
python3 -i -m peertable
Then, insert your listen port, public IP address (preferably either outside a NAT or port forwarded), and external port (if behind a tunnel of sorts, e.g. ngrok), in that order. The external port is optional.
Then, insert a space-separated list of initial peers to connect to (or nothing, if none).
Et voilà! You are now connected, and in the interactive prompt. It might seem like it's idle, but it is connecting to the target peers! And it's sending identification requests (and identifying your server as well, by extension). You can now use the Send Commands:
>>> [c.id for c in s.clients]
['fc4h0MwGznaAqObYDWMXUiAAZ']
>>> for c in s.clients:
...     s.send_id(c.id, peertable.Message(True, "TESTMESG", "Hello! If you are reading this, then a human is behind the recipient peer of this message, i.e. YOU! Your ID: " + c.id))
In this example, the other side would see:
>+ + + + +
Received message!
Sender ID: p8vM6FqyBGzm0Sdnr6XCDoZdy
Message type: TESTMESG
Payload length: 135
Payload in Base64: SGVsbG8hIElmIHlvdSBhcmUgcmVhZGluZyB0aGlzLCB0aGVuIGEgaHVtYW4gaXMgYmVoaW5kIHRoZSByZWNpcGllbnQgcGVlciBvZiB0aGlzIG1lc3NhZ2UsIGkuZS4gWU9VISBZb3VyIElEOiBmYzRoME13R3puYUFxT2JZRFdNWFVpQUFa
- - - - -<
The payload is the message you gave in the third argument of the call to peertable.Message, whose return value, in turn, is the second argument of the call to s.send_id.
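Since the payload travels as plain Base64, the log above can be checked with nothing but the Python standard library. A sketch using the exact payload from the log:

```python
import base64

# The Base64 payload copied verbatim from the log above.
payload_b64 = "SGVsbG8hIElmIHlvdSBhcmUgcmVhZGluZyB0aGlzLCB0aGVuIGEgaHVtYW4gaXMgYmVoaW5kIHRoZSByZWNpcGllbnQgcGVlciBvZiB0aGlzIG1lc3NhZ2UsIGkuZS4gWU9VISBZb3VyIElEOiBmYzRoME13R3puYUFxT2JZRFdNWFVpQUFa"

decoded = base64.b64decode(payload_b64).decode("utf-8")
print(decoded)       # the greeting sent in the example above
print(len(decoded))  # 135, matching the "Payload length" line in the log
```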
Using the Library
For a good example, check this out:
import peertable
import random
import base64

class TestApp(peertable.PeerApplication):
    def receive(self, server, client, message):
        print(">+ + + + +\nReceived message!\nSender ID: {}\nMessage type: {}\nPayload length: {}\nPayload in Base64: {}\n- - - - -<".format(
            client.id, message.message_type, len(message.payload),
            base64.b64encode(message.payload).decode('utf-8')))

if __name__ == "__main__":
    print("Insert your new peer's listen port: (defaults to 2912)")
    port = int(input() or 2912)
    print()
    print("Insert your machine's public IP address (so others can connect to you, etc):")
    my_addr = input()
    print()
    print("Insert this server's public port, in case you use a tunnel or port forward (or none otherwise):")
    my_port = int(input() or port)
    s = peertable.PeerServer(my_addr, port=port, remote_port=my_port)
    s.start_loop()
    s.register_app(TestApp())
    print()
    print("My port: " + str(s.port))
    print("My ID: " + str(s.id))
    print()
    print("Insert target IP:port addresses, separated by space:")
    addrs = input()
    for addr in addrs.split(' '):
        try:
            addr = addr.split(':')
            if len(addr) < 2:
                raise ValueError("-----")
            addr[1] = int(addr[1])
            s.connect(tuple(addr))
        except ValueError:
            pass
And before you ask: yes, this is the script that runs when you do python3 -i -m peertable.
https://pypi.org/project/Peertable/0.1.3.2/
- IsNameSetExplicitly: Gets whether Name has been explicitly set.
- IsNamespaceSetExplicitly: Gets whether Namespace has been explicitly set.
- IsReference: Gets or sets a value that indicates whether to preserve object reference data.
- IsReferenceSetExplicitly: Gets whether IsReference has been explicitly set.
- Name: Gets or sets the name of the data contract for the type.
- Namespace: Gets or sets the namespace for the data contract for the type.
- TypeId: When implemented in a derived class, gets a unique identifier for this Attribute. (Inherited from Attribute.)
DataContractAttribute Class, System.Runtime.Serialization Namespace
https://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractattribute_properties(v=vs.110).aspx
I se…
tlfong01
2618
What you can do to execute two sequences of statements at the same time ("in parallel", or "concurrently") is to use the Python module "multiprocessing" (preferred) or "multi-threading" (not preferred).
In other words, you still need to know a little bit of MT (but don't dig too deep) in order to thoroughly understand what the MP guys are talking about. Just thinking aloud, sorry for the typos. Happy multi-processing. Cheers.
Ah, I forgot to give you the important web link: (1) Python 3.8.1 Multiprocessing — Process-based parallelism, docs.python.org/3/library/multiprocessing.html.
Abhilash Wakodikar
Can this be done with timer? I don’t know how to do programming of timer in raspberry pi but still asking.
14 hours later…
tlfong01
2618
13:28
@Abhilash Wakodikar, yes, you can do this with hardware and software timers. But #jspotla's trick is the simplest for your application. In case you wish to have a timer solution for general cases, let me see if I can find more references. It might take some time, though. Cheers.
tlfong01
2618
14:02
Now I have some timer functions for your reference. Let me see if I can list them here.
# ftime73.py tlfong01 2019nov26hkt2022
# Rpi4B buster 2019sep26 python 3.7.3 Thonny 3.2

import inspect
from time import sleep
from datetime import datetime
import fprint171 as fprint

# ********************************************************************************
# *** Date Time Functions ***

def delaySeconds(secondsNum):
    sleep(secondsNum)
    return

def oneSecondDelay():
    delaySeconds(1)
    return
tlfong01
2618
14:20
I don't know how to format my program listing. In case you find the above listing messed up, or cannot copy and paste it to test run, you may try the following penzu file: penzu.com/p/739e7acd. The main thing is to import the python module "datetime"; then you can use the "elapsed time" thing to time two events.
There are two kinds of timers: (1) counting up, like a stopwatch, to find the elapsed time between two events, (a) start and (b) stop; (2) counting down, like an alarm clock, where you give a start time and let it count down to zero, then raise an alarm, to let the Rpi switch off a LED etc. My file contains only counting up; you need to fiddle a bit to turn it into counting down, by using a while loop, or "interrupts", which are more advanced and not recommended to newbies. Happy programming. Cheers.
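As a minimal sketch of both kinds of timer (the function names are mine; only the standard library is used):

```python
import time

def elapsed_seconds(start):
    """Count-up (stopwatch): seconds since `start`, a time.monotonic() value."""
    return time.monotonic() - start

def countdown(seconds, tick=0.1):
    """Count-down (alarm clock): block until `seconds` have passed,
    polling in small steps so a real program could also do other work
    (e.g. switch off a LED when the deadline is reached)."""
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        time.sleep(tick)

start = time.monotonic()
countdown(0.3)
print(round(elapsed_seconds(start), 1))  # roughly 0.3
```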
By the way, python multiprocessing is not as hard as you might expect. If you read the tutorial I suggested, or other tutorials, and find them suitable for your applications, then you may like to let me know your situation, and perhaps I can give some further suggestions.
https://tlfong01.blog/2020/02/09/python-parallel-programming-notes-2/
An explosion is nothing more than a bunch of particles (be them pixels, small shapes or images) scattered across the screen, originating from a single point. Not all the time but mostly and for the sake of simplicity we’ll assume that all particles originate from a single point.
Just think of fireworks. A tiny little rocket shoots up and explodes into hundreds of sparkling little stars that fade out as they fall down. What happens is that a huge force in the center of the rocket rips the body apart (thus creating particles) and scatters them randomly around the point of explosion.
To make it simple we’ll create a few particles, we will place them in one place (the origin) and give them random forces. A force is a vector quantity. It means that it has magnitude and direction. The magnitude will determine its speed and its direction will tell the particle which way to go.
The Particle
The class file:
public class Particle {
    public static final int STATE_ALIVE = 0;        // particle is alive
    public static final int STATE_DEAD = 1;         // particle is dead
    public static final int DEFAULT_LIFETIME = 200; // play with this
    public static final int MAX_DIMENSION = 5;      // the maximum width or height
    public static final int MAX_SPEED = 10;         // maximum speed (per update)

    private int state;      // particle is alive or dead
    private float width;    // width of the particle
    private float height;   // height of the particle
    private float x, y;     // horizontal and vertical position
    private double xv, yv;  // vertical and horizontal velocity
    private int age;        // current age of the particle
    private int lifetime;   // particle dies when it reaches this value
    private int color;      // the color of the particle
    private Paint paint;    // internal use to avoid instantiation
}
The particle is nothing more than a little rectangle (this can be an image, circle or any other shape, but in our case we use a rectangle) with a few properties.
It has a state. This indicates whether the particle is alive or dead. A particle is alive when its color is not black (it hasn’t faded) and its age hasn’t reached its lifetime. More on this a bit later.
It has a position. It’s position in a 2D coordinate system is represented by 2 points: x and y.
It also has a speed and a direction. As you recall, speed is a vector, so it has 2 components in 2D. In 3D it will also have the z component, but we stay in 2D for now. To keep it simple, we add two properties for this: xv and yv.
The age of the particle is it’s 0 in the beginning and is incremented at each update.
The lifetime is the maximum age a particle can reach before it dies.
The rest are color and paint. These are for drawing only.
If you recall the previous entries, the game update is nothing more than calling the update method of every entity in the game and displaying them. The update method of the particle is pretty simple.
But first we need to create the particle:
public Particle(int x, int y) {
    this.x = x;
    this.y = y;
    this.state = Particle.STATE_ALIVE;
    this.width = rndInt(1, MAX_DIMENSION);
    this.height = this.width;
    this.lifetime = DEFAULT_LIFETIME;
    this.age = 0;
    this.xv = (rndDbl(0, MAX_SPEED * 2) - MAX_SPEED);
    this.yv = (rndDbl(0, MAX_SPEED * 2) - MAX_SPEED);
    // smoothing out the diagonal speed
    if (xv * xv + yv * yv > MAX_SPEED * MAX_SPEED) {
        xv *= 0.7;
        yv *= 0.7;
    }
    this.color = Color.argb(255, rndInt(0, 255), rndInt(0, 255), rndInt(0, 255));
    this.paint = new Paint(this.color);
}
Check the creation of a particle and it should be straight forward.
You will notice that a particle is created at position x,y.
The state is set to alive.
We want to randomise the size of the rectangles because an explosion creates particles in different sizes and shapes but we’ll just randomise the size and colour.
I have written a few helper methods that give me random numbers, for this check the complete source code.
Next the lifetime is set. Every particle will have the same lifetime.
The age is 0 of course as the particle has just been born.
Next is the interesting bit. It's very amateurish. To set the speed I have used 2 random numbers for the 2 components of the speed vector (xv and yv). The smoothing is needed because if both components are near the maximum value, then the resulting magnitude will be over the max speed. You could use simple trigonometric functions with a random degree instead of this.
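That trigonometric alternative might look like this (a sketch; the class and helper names are mine). Picking a random angle and a random magnitude guarantees the speed never exceeds the maximum, so no smoothing step is needed:

```java
import java.util.Random;

public class RandomVelocity {

    // Pick a random direction (angle) and a random magnitude (speed),
    // then derive the xv/yv components; magnitude is never above maxSpeed.
    static double[] randomVelocity(Random rnd, double maxSpeed) {
        double angle = rnd.nextDouble() * 2 * Math.PI;
        double speed = rnd.nextDouble() * maxSpeed;
        return new double[] { speed * Math.cos(angle), speed * Math.sin(angle) };
    }

    public static void main(String[] args) {
        double[] v = randomVelocity(new Random(), 10.0);
        System.out.println("xv=" + v[0] + " yv=" + v[1]);
    }
}
```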
The last thing to set is the color, which again is randomised.
There you have it.
The update() method for the particle.
public void update() {
    if (this.state != STATE_DEAD) {
        this.x += this.xv;
        this.y += this.yv;

        // extract alpha
        int a = this.color >>> 24;
        a -= 2; // fade by 2
        if (a <= 0) {
            // if reached transparency kill the particle
            this.state = STATE_DEAD;
        } else {
            this.color = (this.color & 0x00ffffff) + (a << 24); // set the new alpha
            this.paint.setAlpha(a);
            this.age++; // increase the age of the particle
        }
        if (this.age >= this.lifetime) {
            // reached the end of its life
            this.state = STATE_DEAD;
        }
    }
}
It’s pretty simple. Every update, the position is set according to the speed and the alpha component of the particle’s color is decremented. In other words the particle is being faded.
If the age exceeded the lifetime or the opacity is 0 (that means that it is completely transparent) the particle is declared dead.
If you wonder about the magic with colours, it is quite simple once you get the bitwise operators. Don’t worry, I’m rubbish as well, just make sure you know where to look. Here is a good explanation of colour components and how to use bitwise operators to manipulate them:. It’s faster than using the objects but you can safely use the android methods too.
Just as a side note on colours
You can specify colours in Android as an int. If you’re familiar with rgb and argb that is great.
rgb is 24 bits color while argb is 32 bits. It also has the alpha component which is transparency/opacity.
Opacity values: 0 = transparent, 255 = completely opaque.
To represent an int in hex you just prefix it with 0x. A color in hex is simple: 0x00FF00 is green for example. The pattern is: 0xRRGGBB (Red, Green, Blue). Now to add the alpha you will add it to the beginning. 0xAARRGGBB.
Because it is in hex, the values are between 00 and FF. 0 being 0 and FF being 255 in decimal.
When you create a colour out of components like color(a, r, g, b) (for example: new Color(125, 255, 0, 0) creates a semi transparent red), you can simply create it with an integer expressed in hex like this: new Color(0x80FF0000);
This is how you would extract the components of an argb colour:

int color = 0xff336699;
int alpha = color >>> 24;
int red = color >>> 16 & 0xFF;
int green = color >>> 8 & 0xFF;
int blue = color & 0xFF;
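Packing the components back into a single int is the mirror image. A hypothetical helper (not from the tutorial):

```java
public class ArgbDemo {

    // Pack 0-255 components into one ARGB int (the reverse of the
    // shift-and-mask extraction shown above).
    static int argb(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int color = argb(0xFF, 0x33, 0x66, 0x99);
        // Integer.toHexString treats the int as unsigned: prints ff336699
        System.out.println(Integer.toHexString(color));
    }
}
```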
The draw() method is simple again.
public void draw(Canvas canvas) {
    paint.setColor(this.color);
    canvas.drawRect(this.x, this.y, this.x + this.width, this.y + this.height, paint);
}
At this stage try to create some particles in your game panel and see what happens.
The Explosion
The explosion is nothing more than hundreds of particles originating from one place, the origin.
In the image above you see the first 4 updates of a simple explosion. All the particles have the same speed but they spread out in different directions. Each circle is one update.
The main properties of an explosion are:
public class Explosion {
    public static final int STATE_ALIVE = 0; // at least 1 particle is alive
    public static final int STATE_DEAD = 1;  // all particles are dead

    private Particle[] particles; // particles in the explosion
    private int x, y;             // the explosion's origin
    private int size;             // number of particles
    private int state;            // whether it's still active or not
}
It contains an array of particles. The size is the number of particles. An explosion is alive if it has at least one particle alive.
The update is extremely simple. It iterates through all the particles and calls the update() method on each particle. The draw() ditto.
In our application we will create explosions on the screen where we touch it.
The constructor is very simple:
public Explosion(int particleNr, int x, int y) {
    Log.d(TAG, "Explosion created at " + x + "," + y);
    this.state = STATE_ALIVE;
    this.particles = new Particle[particleNr];
    for (int i = 0; i < this.particles.length; i++) {
        Particle p = new Particle(x, y);
        this.particles[i] = p;
    }
    this.size = particleNr;
}
The array of particles is being filled at the touch down position.
In our application we will allow up to 10 explosions. So in the MainGamePanel we declare an array of explosions.
private Explosion[] explosions;
In the surfaceCreated method we instantiate the array and fill it with null.
explosions = new Explosion[10];
for (int i = 0; i < explosions.length; i++) {
    explosions[i] = null;
}
The onTouchEvent is where we create explosions.
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        // find the first free slot: a null entry or a finished explosion
        // (check the index before dereferencing, so we never run past the array)
        int currentExplosion = 0;
        while (currentExplosion < explosions.length
                && explosions[currentExplosion] != null
                && explosions[currentExplosion].isAlive()) {
            currentExplosion++;
        }
        if (currentExplosion < explosions.length) {
            explosions[currentExplosion] = new Explosion(EXPLOSION_SIZE,
                    (int) event.getX(), (int) event.getY());
        }
    }
    return true;
}
We iterate through the explosions and, at the first null slot (one that was never used) or the first dead explosion, we create a new one at the touch position. Note that the bounds check has to come before the array access, otherwise the loop throws an ArrayIndexOutOfBoundsException once every slot holds a live explosion.
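The slot search can be factored into a small helper, which makes the bounds handling easy to verify on its own. In this sketch a boolean stands in for "this slot holds a live explosion" (an assumption for illustration):

```java
// Finds the index of the first slot that is free (null or holding a
// dead explosion); returns -1 when every slot is occupied and alive.
public class Slots {
    public static int firstFree(boolean[] alive) {
        // alive[i] stands in for: explosions[i] != null && explosions[i].isAlive()
        for (int i = 0; i < alive.length; i++) {
            if (!alive[i]) return i;
        }
        return -1; // all slots busy: ignore this touch
    }
}
```

Returning a sentinel instead of dereferencing past the end is the key point; the caller simply skips creating an explosion when -1 comes back.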
The update and render methods are straightforward: iterate through the explosions and, for each one that is not null and still alive, call its update and draw methods respectively.
In the final code I added a border around the screen as a wall, plus basic collision detection so the particles bounce off it. The wall is passed to the particle as a reference, and the update method checks for collisions with it. As an exercise, remove the collision handling, or try attaching an image to each particle instead of drawing a rectangle. To create explosions, just tap the screen.
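The bounce itself usually comes down to reflecting the position and flipping the velocity component on the axis where the particle crossed the wall. A one-dimensional sketch (the bounds and names are assumptions, not the tutorial's exact code):

```java
// Illustrative one-dimensional bounce: if the next position would leave
// the [min, max] range, reflect it off the boundary and invert velocity.
public class Bounce {
    // Returns { newPosition, newVelocity } after one time step.
    public static int[] step(int pos, int vel, int min, int max) {
        int next = pos + vel;
        if (next < min) return new int[] { 2 * min - next, -vel };
        if (next > max) return new int[] { 2 * max - next, -vel };
        return new int[] { next, vel };
    }
}
```

Applying the same rule independently on x (against left/right walls) and y (against top/bottom) gives the bouncing behaviour described above.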
It should look like this:
Explore the code and have fun.
Download it here (android.particles.tgz).
Reference: Particle Explosion
You don’t need to fill an array with null – that happens automatically. Simpler just to create the array when declaring it –
private final Explosion[] explosion = new Explosion[10];
Great tutorials!
I have been following your tuts up to this one, and am currently trying to apply what I've learnt. But I have found a problem that I hope you can help me with.
To apply the previous tutorials, I’m trying to make what seemed an easy game, a bit like the guitar hero saga. Since this is only for practicing purposes, I don’t intend to make it “playable” (at least not yet), just want to make the entities behave as they should.
The game should work as follows: “beat lines” should appear at regular periods and move from the top to the bottom of the screen. Attached to some of the lines, there should be some circles, representing the actual beats, that the player has to tap on to get points.
Up to now, the only entities I’m working with are the beat lines. I managed to make them appear ok, just not at regular paces.
I adapted your tutorial on particles (fireworks) to make the beat lines: there is an array of particles (in my case, lines instead of rectangles) that fall at a regular speed. Unlike the firework particles, the lines aren’t created all at the same time, but after a few clock ticks.
To calculate the clock ticks, my app works much like the Elaine tutorial, but instead of displaying the next frame, the class notifies the main game thread that the clock has ticked. So, after a regular number of ticks, another beat line is created in the array. To make the game faster, the number of ticks between lines would be decreased (not implemented yet) so that they start to appear closer to each other, and therefore, faster (that is why beat lines are individual entities instead of just displaying and moving some bitmap).
At most, there are 10 lines at the same time in screen, so I guess my problem shouldn’t be related to performance. Actually, I hope it is not, because if I am able to make it work, the next step is to place the beats on the lines and make them fall accordingly, and that would represent a much bigger performance problem.
Can you give me any hint? Am I doing something wrong, or is my approach totally wrong?
Thanks in advance for your help, keep up the good work!
I am starting back up on my C++ programming and I've been out of it for about 3 years. I am going back through one of my books and it asks you to create a game.
Guess My Number:
-Player picks number 1-100
-Computer proceeds to guess a number till it hits the number you chose
-Be creative!
I wrote the program about 30 min ago and, after some trial and error, it seems to be working except for:
-Kinda sloppy/redundant
-Once the computer guesses the correct number, it skips the output of the correct number and proceeds to the end of the program.
Code:
//Guess My Number
//Dustin Hardin 12-08-09
/* The Player picks a number between 1 and 100
   and the computer guesses till it gets it right. */
#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

int main()
{
    int theNumber;
    int tries = 0, guess, prevGuessHigh = 100, prevGuessLow = 1;

    cout << "\tWelcome to Guess My Number\n\n";
    cout << "Player, please pick a number between 1 and 100: ";
    cin >> theNumber;
    cin.ignore();

    //Generate "random" number between 1 and 100 as computers guess
    srand(time(0));
    guess = rand() % 100 + 1;

    do
    {
        //display previous input, the number you picked, and ask for a number.
        cout << "High: " << prevGuessHigh << ", Low: " << prevGuessLow
             << ", The Number: " << theNumber << "\n\n";
        cout << "Computer, please guess a number: " << guess << "\n\n";
        cin.get();
        ++tries;

        if (guess > theNumber)
        {
            cout << "Too High!\n\n";
            prevGuessHigh = guess;
            //the number is between the high guess and low, so random between these two
            guess = rand() % (prevGuessHigh - prevGuessLow) + prevGuessLow;
            if (guess == prevGuessLow) //Increment guess if it's equal to the last lowest number
                guess++;
        }
        else if (guess < theNumber)
        {
            cout << "Too Low!\n\n";
            prevGuessLow = guess;
            //the number is between the high guess and low, so random between these two
            guess = rand() % (prevGuessHigh - prevGuessLow) + prevGuessLow;
            if (guess == prevGuessLow) //Increment guess if it's equal to the last lowest number
                guess++;
        }
    } while (guess != theNumber);

    cout << "That's it! You got it in: " << tries << " guesses!";
    cout << "\n\nPlease press the enter key to exit....";
    cin.get();
    return 0;
}
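A deterministic alternative to the random-in-range guessing above is plain binary search: always guess the midpoint of the remaining range, which is guaranteed to find any number from 1 to 100 in at most 7 guesses. A sketch of just the strategy (written in Java here only to illustrate the idea, not as a drop-in replacement for the C++ program):

```java
// Binary-search guessing: halve the remaining range on each turn.
public class Guesser {
    // Returns how many guesses are needed to find target in [low, high],
    // or -1 if the target lies outside the range.
    public static int guessesNeeded(int target, int low, int high) {
        int tries = 0;
        while (low <= high) {
            int guess = (low + high) / 2; // midpoint of the remaining range
            tries++;
            if (guess == target) return tries;
            if (guess < target) low = guess + 1;  // "too low"
            else high = guess - 1;                // "too high"
        }
        return -1;
    }
}
```

This also sidesteps the poster's edge cases around `rand() % (high - low)`, since the midpoint is always strictly inside the open interval.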
Introduction
Starting this kind of series by talking about domain driven design and object oriented programming is rather predictable. At first I thought I could avoid the topic for at least a couple posts, but that would do both you and me a great disservice. There are a limited number of practical ways to design the core of your system. A very common approach for .NET developers is to use a data-centric model. There’s a good chance that you’re already an expert with this approach – having mastered nested repeaters, the ever-useful ItemDataBound event and skillfully navigating DataRelations. Another solution which is the norm for Java developers and quickly gaining speed in the .NET community favors a domain-centric approach.
What do I mean by data and domain-centric approaches? Data-centric generally means that you build your system around your understanding of the data you’ll be interacting with. The typical approach is to first model your database by creating all the tables, columns and foreign key relationships, and then mimicking this in C#/VB.NET. The reason this is so popular amongst .NET developers is that Microsoft spent a lot of time automating the mimicking process with DataAdapters, DataSets and DataTables. We all know that given a table with data in it, we can have a website or windows application up and running in less than 5 minutes with just a few lines of code. The focus is all about the data – which in a lot of cases is actually a good idea. This approach is sometimes called data driven development.
Domain-centric design or, as it’s more commonly called, domain driven design (DDD), focuses on the problem domain as a whole – which not only includes the data, but also the behavior. So we not only focus on the fact that an employee has a FirstName, but also on the fact that he or she can get a Raise. The Problem Domain is just a fancy way of saying the business you’re building a system for. The tool we use is object oriented programming (OOP) – and just because you’re using an object-oriented language like C# or VB.NET doesn’t mean you’re necessarily doing OOP.
The above descriptions are somewhat misleading – they somehow imply that if you were using DataSets you wouldn't care about, or be able to provide, the behavior of giving employees a raise. Of course that isn't at all the case – in fact it'd be pretty trivial to do. A data-centric system isn't devoid of behavior, nor does it treat behavior as an afterthought. DDD is simply better suited at handling complex systems in a more maintainable way for a number of reasons – all of which we'll cover in following posts. This doesn't make domain driven better than data driven – it simply makes domain driven better than data driven in some cases, and the reverse is also true. You've probably read all of this before, and in the end, you simply have to make a leap of faith and tentatively accept what we preach – at least enough so that you can judge for yourself.
(It may be crude and a little contradictory to what I said in my introduction, but the debate between The MSDN Way and ALT.NET could be summed up as a battle between data driven and domain driven design. True ALT.NETers though, ought to appreciate that data-driven is indeed the right choice in some situations. I think much of the hostility between the “camps” is that Microsoft disproportionately favors data-driven design despite the fact that it doesn’t fit well with what most .NET developers are doing (enterprise development), and, when improperly used, results in less maintainable code. Many programmers, both inside and outside the .NET community, are probably scratching their heads trying to understand why Microsoft insists on going against conventional wisdom and clumsily playing follow the leader with a 5+ year lag (witness the recent announcement of a MVC pattern slated for 2008)).
Users, Clients and Stakeholders
Something which I take very seriously from Agile development is the close interaction the development team has with clients and users. In fact, whenever possible, I don’t see it as the development team and the client, but a single entity: the team. Whether you’re fortunate enough or not to be in such a situation (sometimes lawyers get in the way, sometimes clients aren’t available for that much commitment, etc.) it’s important to understand what everyone brings to the table. The client is the person who pays the bills and as such, should make the final decisions about features and priorities. Users actually use the system. Clients are oftentimes users, but rarely are they the only user. A website for example might have anonymous users, registered users, moderators and administrators. Finally, stakeholders consist of anyone with a stake in the system. The same website might have a sister or parent site, advertisers, PR or domain experts.
Clients have a very hard job. They have to objectively prioritize the features everyone wants, including their own, and deal with their finite budget. Obviously they'll make wrong choices, maybe because they don't fully understand a user's need, maybe because you made a mistake in the information you provided, or maybe because they improperly give higher priority to their own needs over everyone else's (a lot like the big screen TV I bought my girlfriend for her birthday). As a developer, it's your job to help them out as much as possible and deliver on their needs.
Whether you’re building a commercial system or not, the ultimate measure of its success will likely be how users feel about it. So while you’re working closely with your client, hopefully both of you are working towards your users needs. If you and your client are serious about building systems for users, I strongly encourage you to read up on User Stories – a good place to start is Mike Cohn’s excellent User Stories Applied.
Finally, and the main reason this little section exists, are domain experts. Domain experts are the people who know all the ins and outs about the world your system will live in. I was recently part of a very large development project for a financial institute and there were literally hundreds of domain experts, most of whom were economists or accountants. These are people who are as enthusiastic about what they do as you are about programming. Anyone can be a domain expert – a client, a user, a stakeholder and, eventually, even you. Your reliance on domain experts grows with the complexity of a system.
The Domain Object
As I said earlier, object oriented programming is the tool we’ll use to make our domain-centric design come to life. Specifically, we’ll rely on the power of classes and encapsulation. In this part we’ll focus on the basics of classes and some tricks to get started – many developers will already know everything covered here. We won’t cover persistence (talking to the database) just yet. If you’re new to this kind of design, you might find yourself constantly wondering about the database and data access code. Try not to worry about it too much. In the next part we’ll cover the basics of persistence, and in following parts, we’ll look at persistence in even greater depth.
The idea behind domain driven design is to build your system in a manner that's reflective of the actual problem domain you are trying to solve. This is where domain experts come into play – they'll help you understand how the system currently works (even if it's a manual paper process) and how it ought to work. At first you'll be overwhelmed by their knowledge – they'll talk about things you've never heard about and be surprised by your dumbfounded look. They'll use so many acronyms and special words that you'll begin to question whether or not you're up to the task. Ultimately, this is the true purpose of an enterprise developer – to understand the problem domain. You already know how to program, but do you know how to program the specific inventory system you're being asked to do? Someone has to learn someone else's world, and if domain experts learn to program, we're all out of jobs. One way to bridge that gap is to share a vocabulary: if a domain expert talks about a StrategicOutcome, and you, and your code, talk about a StrategicOutcome, then some of the ambiguity and much of the potential misinterpretation is cleaned up. Many people, myself included, believe that a good place to start is with key noun-words that your business experts and users use. If you were building a system for a car dealership and you talked to a salesman (who is likely both a user and a domain expert), he'll undoubtedly talk about Clients, Cars, Models, Packages and Upgrades, Payments and so on. As these are the core of his business, it's logical that they be the core of your system. Beyond noun-words is the convergence on the language of the business – which has come to be known as the ubiquitous language (ubiquitous means present everywhere). The idea being that a single shared language between users and system is easier to maintain and less likely to be misinterpreted.
Exactly how you start is really up to you. Doing Domain Driven Design doesn’t necessarily mean you have to start with modeling the domain (although it’s a good idea!), but rather it means that you should focus on the domain and let it drive your decisions. At first you may very well start with your data model, when we explore test driven development we’ll take a different approach to building a system that fits very well with DDD. For now though, let’s assume we’ve spoken to our client and a few salespeople, we’ve realized that a major pain-point is keeping track of the inter-dependency between upgrade options. The first thing we’ll do is create four classes:
public class Car { }
public class Model { }
public class Package { }
public class Upgrade { }
Next we’ll fill-in these classes with some safe assumptions:
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class Car
{
    private Model _model;
    private List<Upgrade> _upgrades;

    public void Add(Upgrade upgrade)
    {
        //todo
    }
}

public class Model
{
    private int _id;
    private int _year;
    private string _name;

    public ReadOnlyCollection<Upgrade> GetAvailableUpgrades()
    {
        //todo
        return null;
    }
}

public class Upgrade
{
    private int _id;
    private string _name;

    public ReadOnlyCollection<Upgrade> RequiredUpgrades
    {
        get
        {
            //todo
            return null;
        }
    }
}
Things are quite simple. We’ve added some pretty traditional fields (id, name), some references (both Cars and Models have Upgrades), and an Add function to the Car class. Now we can make slight modifications and start writing a bit of actual behavior.
using System.Collections.Generic;
using System.Collections.ObjectModel;

public class Car
{
    private Model _model; //todo: where to initialize this?
    private List<Upgrade> _upgrades;

    public void Add(Upgrade upgrade)
    {
        _upgrades.Add(upgrade);
    }

    public ReadOnlyCollection<Upgrade> GetMissingUpgradeDependencies()
    {
        List<Upgrade> missing = new List<Upgrade>();
        foreach (Upgrade upgrade in _upgrades)
        {
            foreach (Upgrade required in upgrade.RequiredUpgrades)
            {
                if (!_upgrades.Contains(required) && !missing.Contains(required))
                {
                    missing.Add(required);
                }
            }
        }
        return missing.AsReadOnly();
    }
}
First, we’ve implemented the Add method. Next we’ve implemented a method that lets us retrieve all missing upgrades. Again, this is just a first step; the next step could be to track which upgrades are responsible for causing missing upgrades, i.e. You must select 4 Wheel Drive to go with your Traction Control; however, we’ll stop for now. The purpose was just to highlight how we might get started and what that start might look like.
UI
You might have noticed that we haven't talked about UIs yet. That's because our domain is independent of the presentation layer – it can be used to power a website, a windows application or a windows service. The last thing you want to do is intermix your presentation and domain logic. Doing so won't only result in hard-to-change and hard-to-test code, but it'll also make it impossible to re-use our logic across multiple UIs (which might not be a concern, but readability and maintainability always are). Sadly though, that's exactly what many ASP.NET developers do – intermix their UI and domain layer. I'd even say it's common to see behavior throughout ASP.NET button click handlers and page load events. The ASP.NET page framework is meant to control the ASP.NET UI – not to implement behavior. The click event of the Save button shouldn't validate complex business rules (or worse, hit the database directly); rather its purpose is to modify the ASP.NET page based on the results from the domain layer – maybe it ought to redirect to another page, display some error messages or request additional information.
Remember, you want to write cohesive code. Your ASP.NET logic should focus on doing one thing and doing it well – I doubt anyone will disagree that it has to manage the page, which means it can't do domain functionality. Also, logic placed in codebehind will typically violate the Don't Repeat Yourself principle, simply because of how difficult it is to reuse the code inside an aspx.cs file.
With that said, you can't wait too long to start working on your UI. First of all, we want to get client and user feedback as early and often as possible. I doubt they'll be very impressed if we send them a bunch of .cs/.vb files with our classes. Secondly, making actual use of your domain layer is going to reveal some flaws and awkwardness. For example, the disconnected nature of the web might mean we have to make little changes to our pure OO world in order to achieve a better user experience. In my experience, unit tests are too narrow to catch these quirks, while they are plainly visible as you create your real UI. In fact, out of everything, the impact on the UI is probably the least significant. Of course, it shouldn't surprise you to know that ALT.NET'ers also think you should keep your mind open when it comes to your presentation engine. The ASP.NET Page Framework isn't necessarily the best tool for the job – a lot of us consider it unnecessarily complicated and brittle. We'll talk about this more in a later part, but if you're interested in finding out more, I suggest you look at MonoRail (a Rails-like framework for .NET) and find out about the new page framework being released by Microsoft in 2008. The last thing I want is for anyone to get discouraged with the vastness of changes, so for now, let's get back on topic.
Tricks and Tips
We’ll finish off by looking at some useful things we can do with classes. We’ll only cover the tip of the iceberg, but hopefully the information will help you get off on the right foot.
Factory Pattern
What do we do when a Client buys a new Car? Obviously we need to create a new instance of Car and specify the model. The traditional way to do this is to use a constructor and simply instantiate a new object with the new keyword. A different approach is to use a factory to create the instance:
using System.Collections.Generic; public class Car { private Model _model; private List<Upgrade> _upgrades; private Car() { _upgrades = new List<Upgrade>(); } public static Car CreateCar(Model model) { Car car = new Car(); car._model = model; return car; } }
There are two advantages to this approach. First, we can return a null object, which is impossible to do with a constructor – this may or may not be useful in your particular case. Secondly, if there are a lot of different ways to create an object, it gives you the chance to provide more meaningful function names. The first example that comes to mind is when you want to create an instance of a User class: you'll likely have User.CreateByCredentials(string username, string password), User.CreateById(int id) and User.GetUsersByRole(string role). You can accomplish the same functionality with constructor overloading, but rarely with the same clarity. Truth be told, I always have a hard time deciding which to use, so it's really a matter of taste and gut feeling.
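The same pattern translates directly to Java. This sketch uses the hypothetical User example from the text (the empty-password rule is an invented placeholder, just to show a factory returning null where a constructor couldn't):

```java
// Illustrative factory methods: the private constructor forces callers
// through named creation methods, which are free to return null.
public class User {
    private final String name;

    private User(String name) { this.name = name; }

    public static User createByCredentials(String username, String password) {
        // A constructor could not signal failure this way.
        if (password == null || password.isEmpty()) return null;
        return new User(username);
    }

    public static User createById(int id) {
        return new User("user-" + id); // stand-in lookup for illustration
    }

    public String getName() { return name; }
}
```

The named methods carry intent that overloaded constructors cannot: `createByCredentials` and `createById` read very differently at the call site even though both ultimately return a User.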
Interfaces
Interfaces will play a big part in helping us create maintainable code. We’ll use them to decouple our code as well as create mock classes for unit testing. An interface is a contract which any implementing classes must adhere to. Let’s say that we want to encapsulate all our database communication inside a class called SqlServerDataAccess such as:
using System.Collections.Generic;

internal class SqlServerDataAccess
{
    internal List<Upgrade> RetrieveAllUpgrades()
    {
        //todo implement
        return null;
    }
}

public class Sample
{
    public void SampleMethod()
    {
        SqlServerDataAccess da = new SqlServerDataAccess();
        List<Upgrade> upgrades = da.RetrieveAllUpgrades();
    }
}
You can see that the sample code at the bottom has a direct reference to SqlServerDataAccess – as would the many other methods that need to communicate with the database. This highly coupled code is problematic to change and difficult to test (we can’t test SampleMethod without having a fully functional RetrieveAllUpgrades method). We can relieve this tight coupling by programming against an interface instead:
using System.Collections.Generic;

internal interface IDataAccess
{
    List<Upgrade> RetrieveAllUpgrades();
}

internal class DataAccess
{
    internal static IDataAccess CreateInstance()
    {
        return new SqlServerDataAccess();
    }
}

internal class SqlServerDataAccess : IDataAccess
{
    public List<Upgrade> RetrieveAllUpgrades()
    {
        //todo implement
        return null;
    }
}

public class Sample
{
    public void SampleMethod()
    {
        IDataAccess da = DataAccess.CreateInstance();
        List<Upgrade> upgrades = da.RetrieveAllUpgrades();
    }
}
We've introduced the interface along with a helper class to return an instance of that interface. If we want to change our implementation, say to an OracleDataAccess, we simply create the new Oracle class, make sure it implements the interface, and change the helper class to return it instead. Rather than having to change multiple classes (possibly hundreds of them), we simply have to change one.
This is only a simple example of how we can use interfaces to help our cause. We can beef up the code by dynamically instantiating our class via configuration data or by introducing a framework specially tailored for the job (which is exactly what we're going to do). We'll often favor programming against interfaces over actual classes, so if you aren't familiar with them, I'd suggest you do some extra reading.
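The swap described above looks the same in Java. In this sketch (class names and the one-line fake data are illustrative), only the factory knows which concrete class is in play, so switching from SQL Server to Oracle touches exactly one line:

```java
import java.util.List;

// Program against the interface; only the factory knows the concrete class.
interface DataAccess {
    List<String> retrieveAllUpgrades();
}

class SqlServerDataAccess implements DataAccess {
    public List<String> retrieveAllUpgrades() { return List.of("sql-upgrade"); }
}

class OracleDataAccess implements DataAccess {
    public List<String> retrieveAllUpgrades() { return List.of("oracle-upgrade"); }
}

class DataAccessFactory {
    // Swapping implementations means changing this one line only.
    static DataAccess createInstance() { return new OracleDataAccess(); }
}
```

Every caller holds a DataAccess reference, so none of them compile against, or even know about, the Oracle or SQL Server classes.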
Information Hiding and Encapsulation
Information hiding is the principle that design decisions should be hidden from other components of your system. It's generally a good idea to be as secretive as possible when building classes and components so that changes to implementation don't impact other classes and components. Encapsulation is an OOP implementation of information hiding. Essentially it means that your object's data (the fields), and as much of the implementation as possible, should not be accessible to other classes. The most common example is making fields private with public properties. Even better is to ask yourself if the _id field even needs a public property to begin with.
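To make this concrete, here is a small Java sketch of the Employee raise example mentioned earlier in the article. The field is hidden, and the only way to change it is a behavior method that can enforce a rule (the 50% cap is an invented rule, purely for illustration):

```java
// Encapsulation sketch: the salary field is private, and the raise rule
// lives inside the class instead of being scattered through calling code.
public class Employee {
    private double salary;

    public Employee(double salary) { this.salary = salary; }

    // Behavior, not a bare setter: the invariant travels with the data.
    public void giveRaise(double amount) {
        if (amount < 0 || amount > salary * 0.5) {
            throw new IllegalArgumentException("raise out of allowed range");
        }
        salary += amount;
    }

    public double getSalary() { return salary; }
}
```

Because no caller can touch `salary` directly, the raise rule can later change (or move to a policy object) without breaking any code outside the class, which is the whole point of information hiding.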
Access Modifiers
As you focus on writing classes that encapsulate the behavior of the business, a rich API is going to emerge for your UI to consume. It's a good idea to keep this API clean and understandable. The simplest method is to keep your API small by hiding all but the most necessary methods. Some methods clearly need to be public and others private, but if ever you aren't sure, pick a more restrictive access modifier and only change it when necessary. I make good use of the internal modifier on many of my methods and properties. Internal members are only visible to other members within the same assembly – so if you're physically separating your layers across multiple assemblies (which is generally a good idea), you'll greatly minimize your API.
Conclusion
The reason enterprise development exists is that no single off-the-shelf product can successfully solve the needs of a complex system. There are simply too many odd or intertwined requirements and business rules. To date, no paradigm has been better suited to the task than object oriented programming. In fact, OOP was designed with the specific purpose of letting developers model actual systems in code. It may still be difficult to see the long-term value of domain driven design. Sharing a common language with your client and users in addition to having greater testability may not seem necessary. Hopefully as you go through the remaining parts and experiment on your own, you'll start adopting some of the concepts and tweaking them to fit your and your clients' needs.
It’s refreshing to read and even-handed discussion of data- versus domain-centric development. Thanks for the post and the series.
Karl, are you planning to write more on the topic of object-relational impedance mismatch, common design patterns / practices for domain objects which correspond to one-to-many, many-to-many, and look-up types of relationships between database objects (tables)? You started to write on that matter in “Domain vs. Data Model” part of The Code Wiki, but it would be great to expand on those issues.
Which books/articles would you recommend to read?
After struggling to find other good articles on N-Tier approach (there is a bunch of junk on) I finally discovered one more good article: Building Layered Web Applications with Microsoft ASP.NET 2.0 by Imar Spaanjaars.
He uses custom DAL objects, clearly separated from Domain objects.
Wow. Great article. I’ve been trying to make myself into a decent OO developer for most of my career but I keep getting sucked back into Data Driven Design (or worse)–sometimes for expediency or because of existing team culture, but mostly because of my own limitations. This series is renewing my determination to expand my thinking and helping me refine my direction for doing so.
Here's a tangentially related question that has been in the back of my mind for a while. I'm not exactly new to the concept of programming to interfaces, but most of my projects have dealt with a fairly simple and narrow domain, in a predictable environment. I always find myself at the start of a project asking, "OK, what needs to be defined as an interface?" Then as I think through future scenarios I usually can't see a likely situation where I would ever need to instantiate different implementations of the same interface. In those cases I usually forego the interface definition. Aside from eliminating the ability to easily substitute implementations, what other problems does omitting interfaces in that situation introduce?
Thanks again for the great article!
Thanks Karl, i will read the rest of your (very sensible) articles when they appear.
FYI here is a DNA link
My understanding of it was that it was simply a marketing term, but it encompassed the separation of layers, and first became a buzzword during the days of COM development with microsoft tools.
Joe: I don’t know much about DNA, but I think DNA is tied to specific tools (Microsoft-created tools of course)
as they say on,
“Domain-driven design is not a technology or a methodology. It is a way of thinking and a set of priorities, aimed at accelerating software projects that have to deal with complicated domains.”
Yes, domain driven design is a lot about a logic layer (domain/business/logic layer are all the same thing in my mind), but also about the things I mentioned in part 1, testability, simplicity, DRY, information hiding and so forth. These are higher level concepts – less about the API, more about the pattern.
Could be talking out of my ass though … read the rest of the articles and you be the judge
hi Karl, thanks for this great article series, hope to see next part soon
Is Domain Driven Design another fancy term for Business Logic Layer (i.e. typical microsoft Windows DNA architecture), or is there some extra meaning to DDD that I am missing?
Renaud: that’s a good point. I hadn’t planned on addressing UI in this series, but I agree that it could be useful to approach these concepts from that point of view. I’ll think about it…I have considered expanding on this on DDD and OOP and your suggestion might be the right angle…
Nice intro on DDD.
I think an interesting aspect to explore, in complement to what you say here, is : how will the UI actually use the domain.
For instance, if we must develop a screen that will list the cars currently in the dealership, what class will have to be called to retrieve the proper Cars collection.
I believe that would make the domain more understandable to a lot of developers if they can figure out how it plays out with all the other parts of the application.
I’ve read these sorts of articles many times before, but the way you explain it is excellent.
Thanks for your contribution – I look forward to the remaining posts in the series.
Thanks Karl, I already know RoR, will definitely check that book. Right now data driven design is the only way I know, and although I always read about DDD I'm not really sure how to apply it. However, I would very much like to give it a shot, since I know first hand how data driven ends up being very complicated very fast: the number of entities is usually a multiple of the number of tables, and the code becomes bloated very fast.
Thanks again.
Thanks for sharing your ideas Karl. You make all of this very understandable and the examples you use are very good.
I’m looking forward to your next posts!
jhunter:
That’s a good question, and I agree the article missed the boat a little with respect to that (although it’s hard for me to address without sounding like I have nothing good to say about data driven design).
I think, in practice, a core difference is that the code is actually decoupled from the data layer making it possible to unit test and easier to maintain. This is probably possible with data driven design, but not if you use datasets as your actual classes – which to the best of my knowledge cannot be separated in any meaningful way from the underlying DB.
If you’ve been creating classes like Car and Upgrade and have been writing methods in those to express behavior, then you very well may have been doing DDD all along. In this case, you’d likely have a DAL which may or may not make use of datasets. I’ve seen hybrids that use domain objects but are ultimately coupled to datasets (typed datasets are a good built-in example of such a hybrid) and the abstraction always fails – at least when trying to write tests.
Whether or not this is the official meaning of data driven design I don’t know, but I’d say ASP.NET code that connects directly to the database to pull or store data is a prime example of data driven design. There is no domain (which means you’ve lost the best tool to implement behaviour: OOP) and you’re pretty much dealing in data – passing it back and forth. You might validate, you might have helper methods that enforce certain rules and relationships, but it was designed with the goal of getting property X into column Y.
I know..still vague …
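Karl’s point about behaviour living in a domain can be sketched quickly (illustrative names, and Python rather than the post’s C# purely for compactness):

```python
# In a domain-driven style, the business rule lives on the object itself,
# rather than in UI or data-access code that shuttles rows around.
class Car:
    def __init__(self, base_price):
        self.base_price = base_price
        self.upgrades = []                # list of (name, cost) pairs

    def add_upgrade(self, name, cost):
        self.upgrades.append((name, cost))

    def total_price(self):
        # Behaviour expressed through OOP, not a SQL aggregate in the UI.
        return self.base_price + sum(cost for _, cost in self.upgrades)

car = Car(20000)
car.add_upgrade("leather seats", 1500)
print(car.total_price())  # 21500
```

Nothing here knows about column Y; persistence is somebody else’s job, which is what makes the class testable.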
Thank you for sharing your knowledge! The article is wonderful, keep on writing…
DaRage:
Probably the best one is Jimmy Nilsson’s Applying Domain-Driven Design and Patterns ()
I actually think The Pragmatic Programmers’ Agile Web Development with Rails is a perfect example of how to intermix an API/framework book with a design concept. After reading it, you realize that almost every (maybe all??) ASP.NET book is just a prettied-up version of MSDN documentation… Although I wouldn’t recommend that book unless you wanna learn RoR as well.
Either I don’t understand domain driven design or I’ve always done it?
How is it different than data driven design in practice?
Great read.. looking for more. I liked the recommendation for the user stories book. What book(s) do you recommend for DDD? I know that Bruce Eckel (the author of thinking in java) has a CD course series but it’s $500.
This is a great post Karl. I love it!
Too often we focus on very complex edge conditions and forget that our first and foremost goal should be to teach by example!
You can count on an extension from me in the next few days….
Cheers,
Greg
Great stuff, thanks. Gonna refactor bigtime after we launch.
This is exactly the type of stuff the alt.net’ers should be doing. Thanks
Great article Karl.
|
http://codebetter.com/karlseguin/2007/11/29/foundations-of-programming-pt-2-domain-domain-domain/
|
CC-MAIN-2014-42
|
refinedweb
| 4,831
| 60.75
|
On Wed, Nov 16, 2011 at 1:06 PM, PJ Eby <pje at telecommunity.com> wrote:

> On Wed, Nov 16, 2011 at 1:21 PM, Eric Snow <ericsnowcurrently at gmail.com> wrote:
>>
>> But which is more astonishing (POLA and all that): running your module
>> in Python, it behaves differently than when you import it (especially
>> __name__); or you add an __init__.py to a directory and your *scripts*
>> there start to behave differently?
>
> To me it seems that the latter is more astonishing because there's less
> connection between your action and the result. If you're running something
> differently, it makes more sense that it acts differently, because you've
> changed what you're *doing*. In the scripts case, you haven't changed how
> you run the scripts, and you haven't changed the scripts, so the change in
> behavior seems to appear out of nowhere.

Well, then I suppose both are astonishing and, for me at least, the
module-as-script side of it has bit me more. Regardless, both are a
consequence of the script vs. module situation.

>> When I was learning Python, it took quite a while before I realized
>> that modules are imported and scripts are passed at the commandline;
>> and to understand the whole __main__ thing.
>
> It doesn't seem to me that PEP 395 fixes this problem. In order to
> *actually* fix it, we'd need to have some sort of "package" statement like
> in other languages - then you'd declare right there in the code what package
> it's supposed to be part of.

Certainly an effective indicator that a file's a module and not a script.
Still, I'd rather we find a way to maintain the filesystem-based package
approach we have now. It's nice not having to look in each file to figure
out the package it belongs to or if it's a script or not. The consequence
is that a package that's spread across multiple directories is likewise
addressed through the filesystem, hence PEPs 382 and 402. However, the
namespace package issue is a separate one from script-vs-module.

>> It has always been a pain, particularly when I wanted to
>> just check a module really quickly for errors.
>
> What, specifically, was a pain? That information might be of more use in
> determining a solution.
>
> If you mean that you had other modules importing the module that was also
> __main__, then I agree that having a solution for __main__-aliasing is a
> good idea. PEP 395 spells out several pretty well.

Additionally, running a module as a script can cause trouble if your module
otherwise relies on the value of __name__. Finally, sometimes I rely on a
module triggering an import hook, though that is likely a problem just for me.

> I just think it might be more cleanly fixed by checking whether
> the __file__ of a to-be-imported module is going to end up matching
> __main__.__file__, and if so, alias __main__ instead.

Currently the only promise regarding __file__ is that it will be set on the
module object once the module has been loaded but before the implicit
binding for the import statement. So, unless I'm mistaken, that would have
to change to allow for import hooks. Otherwise, sure.

>> However, lately I've actually taken to the idea that it's better to
>> write a test script that imports the module and running that, rather
>> than running the module itself. But that came with the understanding
>> that the module doesn't go through the import machinery when you *run*
>> it, which I don't think is obvious, particularly to beginners. So
>> Nick's solution, to me, is an appropriate concession to the reality
>> that most folks will expect Python to treat their modules like modules
>> and their scripts like scripts.
>
> You lost me there: if most people don't understand the difference, then why
> are they expecting a difference?

Yeah, that wasn't clear. :) When someone learns Python, they probably are
not going to recognize the difference between running their module and
importing it. They'll expect their module to work identically if run as a
script or imported. They won't even think about the distinction. Or maybe
I'm really out of touch (quite possible :). It'll finally bite them when
they implicitly or explicitly rely on the module state set by the import
machinery (__name__, __file__, etc.), or on customization of that machinery
(a la import hooks). Educating developers on the distinction between
scripts and modules is good, but it seems like PEP 395 is trying to bring
the behavior more in line with the intuitive behavior, which sounds good
to me.

Regarding the PEP 402 conflict, if using .pyp on directory names addresses
Nick's concern, would you be opposed to that solution?

-eric

p.s. where should I bring up general discussion on PEP 395?
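The __main__-aliasing problem discussed in this thread can be demonstrated with the standard library alone; a small sketch (the module name "demo" is arbitrary):

```python
import importlib
import os
import runpy
import sys
import tempfile

# The same file ends up existing twice: once imported as module "demo",
# and once executed under the name "__main__", as `python -m demo` would.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.py"), "w") as f:
    f.write("seen_name = __name__\n")

sys.path.insert(0, d)
importlib.invalidate_caches()

import demo                                               # a normal import
as_main = runpy.run_module("demo", run_name="__main__")   # like `python -m demo`

print(demo.seen_name)        # "demo"
print(as_main["seen_name"])  # "__main__": a second, independent copy
```

Any module-level state (counters, registries, singletons) is duplicated between the two copies, which is exactly the aliasing hazard PEP 395 describes.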
|
https://mail.python.org/pipermail/import-sig/2011-November/000388.html
|
CC-MAIN-2016-36
|
refinedweb
| 819
| 61.67
|
31 May 2011 22:10 [Source: ICIS news]
HOUSTON (ICIS)--
The turnaround at the 190,000 tonne/year plant is expected to last between 40-47 days, the source said.
Buyer sources said the turnaround has been postponed several times because of inadequate inventories ahead of the scheduled dates.
The emergency inventories appeared to be under control this time, with some imports already arriving in the country to keep buyers supplied during the procedure.
About 30,000 tonnes of polyethylene (PE) were scheduled to arrive between now and September, together with 12,000 tonnes of polypropylene (PP) to arrive during the same period, industry sources said.
Ethylene production was scheduled to stop in order to coincide with the PE plant maintenance, but this could not be confirmed on Tuesday.
Buyers said the government was setting up credit lines with state banks to finance the purchases of imported material, which will be sold at international
|
http://www.icis.com/Articles/2011/05/31/9464954/venezuela-lldpe-turnaround-to-start-in-june.html
|
CC-MAIN-2014-15
|
refinedweb
| 156
| 50.06
|
Unobtrusive Ruby is any Ruby code that stays out of your way. It does not make you write lots of boilerplate, or stub methods, or open classes. It is decoupled. Its tests run quickly, its classes fit on one screen, its methods are tiny, and it is quickly refactorable.
Unobtrusive Ruby is a state of mind. Unobtrusive Ruby is what you want your friends to write.
Since you love your friends, here are some guidelines while you design your next gem or framework:
Take objects, not classes
All your arguments should be instantiated objects, where the caller has invoked the .new method herself. Leave the constructor up to the class author. Never force a calling convention on a constructor, or in any way force someone to look up how to instantiate her own class.
Never require inheritance
Object inheritance is one way to force the user—your friend—to do odd things with the constructor. When to call super, what the arguments are, and so on. This goes for mixins, too; they add complexity to the class's namespace, increasing the cognitive overhead. All inheritance adds brittleness.
Inject dependencies
The contents of a class should mention no other classes by name. Pass in all objects that it must use. Not only will you thank yourself for the flexibility later, you'll notice that your tests are better at the beginning.
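A minimal sketch of this guideline (written in Python here purely so the snippet is self-contained; the class and method names are illustrative, not from the post):

```python
# Report names no collaborator class anywhere: its storage is injected,
# so anything with a write() method will do.
class Report:
    def __init__(self, storage):
        self.storage = storage

    def save(self, text):
        self.storage.write(text)

# In a test, inject a cheap stand-in instead of a real file or database.
class FakeStorage:
    def __init__(self):
        self.written = []

    def write(self, text):
        self.written.append(text)

fake = FakeStorage()
Report(fake).save("hello")
print(fake.written)  # ['hello']
```

Because the dependency arrives through the constructor, the test needs nothing beyond the test framework and the class under test, which is the "ideal dependencies" point above.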
Unit tests are fast
A unit test on a small class should be fast, and classes should be small. Mock objects, a lack of inheritance, and dependency injection will see to it. The tests for your friend's code should not have the same dependencies as the tests for your own code. The ideal dependencies are nothing more than the test framework and the class under test.
Require no interface
Classes should care about composing instead of being composed. The best class will consume instead of being consumed. It makes its own rules. Maybe you can't get there—something has to be consumed at some point—but it's a useful goal. If it's not possible, the required interface must be simple. One method, maybe two.
Data, not action
Data is easy to test: pass some values into a method, receive a value from the method, then verify that it's the right value. Never force your friend to do anything more than this. Never force them to do database lookups, system calls, random number generation, and so on; do that for them.
Always be coding
The overall idea here is that your friend should always be able to just write code. You never want to stand in her way; never force documentation or change her classes or invent bizarre loopholes. You, as the gem author and framework designer, want your friends to build their system without noticing the system you've built.
Unobtrusive Ruby does what you expect.
|
http://robots.thoughtbot.com/unobtrusive-ruby
|
CC-MAIN-2014-15
|
refinedweb
| 481
| 74.39
|
MOD: Couldn't find anything about this in the forum rules so I think it's OK, but please delete if it is against the rules. Also if you need to move it to the Affiliate section that's fine; to me it's more of a JV, that's why I put it here. I'm about to launch a new site of mine and in doing so I will be signing up for a new HostGator hosting account. I thought I would sign up under someone's affiliate link if they offer me something in return (preferably link building or content). If you'd like me to sign up under your offer just let me know what you would do in return. PicStar
|
https://www.blackhatworld.com/seo/jv-what-can-you-offer.259704/
|
CC-MAIN-2017-26
|
refinedweb
| 120
| 73.51
|
Updated by mrkn (Kenta Murata) 6 months ago
Do you want the way to load and dump the memory view metadata of any objects that support exporting their memory view?
Could you please tell me the example use cases you've assumed?
Updated by mrkn (Kenta Murata) 6 months ago
Updated by dsisnero (Dominic Sisneros) 6 months ago
On the consumer side, we can Marshal those objects the usual way, which when unserialized will give us a copy of the original object:
b = ZeroCopyByteArray.new("abc".bytes)
data = Marshal.dump(b)
new_b = Marshal.load(data)
puts b == new_b # True
puts b.equal? new_b # False: a copy was made
But if we pass a buffer_callback and then give back the accumulated buffers when unserializing, we are able to get back the original object:
b = ZeroCopyByteArray.new("abc".bytes)
buffers = []
data = Marshal.dump(b, buffer_callback: buffers.method(:append))
new_b = Marshal.load(data, buffer: buffers)
puts b == new_b # True
puts b.equal? new_b # True: no copy was made
class ZeroCopyByteArray < Arrow::Buffer
  def _dump
    if Marshal.protocol >= 5
      self.class._reconstruct(MarshalBuffer.new(self), nil)
    else
      # MarshalBuffer would be forbidden with Marshal protocols <= 4.
      self.class._reconstruct(self.to_s)
    end
  end

  def self._load(obj)
    m = MemoryView.new(obj)
    obj = m.obj
    if obj.class == self
      obj
    else
      new(obj)
    end
  end
end
Updated by mrkn (Kenta Murata) 6 months ago
You cannot get the original object from Marshal.load. This is Marshal.load's nature. Marshal.load always creates a new object (a different object from the original one). Object#equal? compares object identities, so b.equal? new_b is always false.
Updated by dsisnero (Dominic Sisneros) 6 months ago
That is the case now. I am proposing changing Marshal to allow loading into an existing object, preserving object identity. This is one of the things Python's latest pickle format allows. They use it to marshal large numpy arrays to a distributed object store. See my original link.
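For reference, the Python feature being cited here (PEP 574 out-of-band buffers, pickle protocol 5, CPython 3.8+) really does round-trip the identical object; this is essentially the example from the CPython pickle documentation, which the Ruby sketch above was adapted from:

```python
import pickle
from pickle import PickleBuffer

class ZeroCopyByteArray(bytearray):
    # Opt in to out-of-band buffers under protocol 5.
    def __reduce_ex__(self, protocol):
        if protocol >= 5:
            return type(self)._reconstruct, (PickleBuffer(self),), None
        else:
            # PickleBuffer is forbidden with pickle protocols <= 4.
            return type(self)._reconstruct, (bytearray(self),)

    @classmethod
    def _reconstruct(cls, obj):
        with memoryview(obj) as m:
            obj = m.obj              # handle on the original buffer object
            if type(obj) is cls:
                return obj           # original object recovered: no copy
            return cls(obj)

b = ZeroCopyByteArray(b"abc")
buffers = []
data = pickle.dumps(b, protocol=5, buffer_callback=buffers.append)
new_b = pickle.loads(data, buffers=buffers)
print(new_b is b)  # True: the very same object, no copy was made
```

Without passing the accumulated buffers back in, loads still works but yields an equal copy rather than the original, matching the first half of the comment above.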
Updated by mrkn (Kenta Murata) 6 months ago
The object identity in Ruby is defined by the value of object_id. Object#equal? just compares the value of object_id. No more than one object has the same value of object_id. Marshal cannot generate an object whose equal? returns true for another object, because no two objects share the same value of object_id. What is the reason why you stick to the equal? method and Marshal combination? Doesn't == work well for your purpose?
Updated by mrkn (Kenta Murata) 6 months ago
- Status changed from Open to Feedback
Also available in: Atom PDF
|
https://bugs.ruby-lang.org/issues/17685?tab=history
|
CC-MAIN-2021-39
|
refinedweb
| 434
| 60.82
|
Amazon Echo on Raspberry Pi
DISCLAIMER: This video and description contains affiliate links, which means that if you click on one of the product links, I’ll receive a small commission. This help support the channel and allows us to continue to make videos like this. Thank you for the support!
Participated in the
Raspberry Pi Contest 2016
Participated in the
Full Spectrum Laser Contest 2016
Participated in the
Hack Your Day Contest
14 Discussions
2 years ago
2 years ago
I got this error:
Traceback (most recent call last):
File "./auth_web.py", line 3, in <module>
import cherrypy
ImportError: No module named cherrypy
-------------------------
how do i install cherry pi? Excuse the noob question :)
3 years ago
Hey, I managed to setup Alexa on my Raspberry Pi by following all the instructions that you have provided in this video without any errors. After manually executing the main.py script in the terminal, I tried to connect GPIO pin 18(#12) & GPIO pin 3(#5) and spoke my command into the USB microphone. Unfortunately there was no output audio. I know my Audio Jack of my RasPi is working properly since I played a couple of YouTube videos just to test whether the audio was working fine.
Reply 3 years ago
Your video really seems to explain the installation and code process very well. Could you add a couple of steps regarding setting up the USB microphone, the push button, automatically starting the program on boot, how to try out a command, etc.?
3 years ago
Do you think is possible to install an infrared relay as a push button instead?
Reply 3 years ago
Do not need a relay, just an IR remote and an IR sensor on RPi GPIO
3 years ago
So this requires a push button to trigger listening? So this is more of an "Amazon Tap" then an Echo?
3 years ago
I'm on university WiFi, and I'm not sure I can assign a manual ip to my raspberry pi without messing up the system. Any way around this?
3 years ago
Thank you again for this great instructable! Here's my finished one and a link to it in action at youtube:
3 years ago
Ouch. I put this together as your video instructs but got an error in the setup for line 32 and line 38. I see others have had this issue but can't see where/if there is a fix. Please advise. Thanks!
Reply 3 years ago
i have just updated the source code to fix this issue. please let me know if this helps
3 years ago
where is the video link?
3 years ago
Thank you for this! FYI - I'm using the Raspberry Pie as a go-between (HA bridge) with the Harmony Hub to control all home entertainment stuff. Works much better than IFTTT. Wondering if installing this on the Pi, then installing the HA bridge could act as an all-in-one thus eliminating the physical Echo all together?
3 years ago
can you elaborate more on the specific steps to take to make this happen? additional hardware requirements or options?
|
https://www.instructables.com/id/Amazon-Echo-on-Raspberry-Pi/
|
CC-MAIN-2019-35
|
refinedweb
| 568
| 72.46
|
I stumbled across a blog post detailing how to implement a powerset function in Python. So I went about trying my own way of doing it, and discovered that Python apparently cannot have a set of sets, since set is not hashable. This is irksome, since the definition of a powerset is that it is a set of sets, and I wanted to implement it using actual set operations.
>>> set([ set() ])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'set'
Is there a good reason Python sets are not hashable?
Answers
Because they’re mutable.
If they were hashable, a hash could silently become “invalid”, and that would pretty much make hashing pointless.
Generally, only immutable objects are hashable in Python. The immutable variant of set() — frozenset() — is hashable.
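Since frozenset is hashable, the powerset from the original question can in fact be built as an actual set of (frozen)sets; a small sketch:

```python
from itertools import combinations

def powerset(s):
    """Return the powerset of s as a set of frozensets."""
    items = list(s)
    return {frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)}

ps = powerset({1, 2})
print(len(ps))  # 4: the empty set, {1}, {2}, and {1, 2}
```

Using frozenset for the inner sets sidesteps the TypeError above while keeping genuine set semantics (membership tests, unions, and so on) on the result.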
From the Python docs:().
In case this helps… if you really need to convert unhashable things into hashable equivalents for some reason you might do something like this:
In case this helps… if you really need to convert unhashable things into hashable equivalents for some reason you might do something like this:

from collections.abc import Hashable, MutableSet, MutableSequence, MutableMapping

def make_hashdict(value):
    """
    Inspired by - with the added bonus that it inherits from the dict type
    of value so OrderedDicts maintain their order and other subclasses of
    dict() maintain their attributes
    """
    map_type = type(value)

    class HashableDict(map_type):
        def __init__(self, *args, **kwargs):
            super(HashableDict, self).__init__(*args, **kwargs)

        def __hash__(self):
            return hash(tuple(sorted(self.items())))

    return HashableDict(value)

def make_hashable(value):
    if not isinstance(value, Hashable):
        if isinstance(value, MutableSet):
            value = frozenset(value)
        elif isinstance(value, MutableSequence):
            value = tuple(value)
        elif isinstance(value, MutableMapping):
            value = make_hashdict(value)
    return value

my_set = set()
my_set.add(make_hashable(['a', 'list']))
my_set.add(make_hashable({'a': 1, 'dict': 2}))
my_set.add(make_hashable({'a', 'new', 'set'}))
print(my_set)
My HashableDict implementation is the simplest and least rigorous example from here. If you need a more advanced HashableDict that supports pickling and other things, check the many other implementations. In my version above I wanted to preserve the original dict class, thus preserving the order of OrderedDicts. I also use AttrDict from here for attribute-like access.
My example above is not in any way authoritative, just my solution to a similar problem where I needed to store some things in a set and needed to “hashify” them first.
|
http://w3cgeek.com/why-arent-python-sets-hashable.html
|
CC-MAIN-2019-04
|
refinedweb
| 764
| 53
|
Question
Discuss the complicated role that the Internet plays as a market channel in today's business environment.
|
http://www.solutioninn.com/discuss-the-complications-role-that-the-internet-plays-as-a
|
CC-MAIN-2017-13
|
refinedweb
| 196
| 76.11
|
Opened 13 years ago
Closed 12 years ago
Last modified 11 years ago
#329 closed defect (fixed)
RSS framework needs an easier interface
Description
GarthK came up with a cool simpler interface to rss.FeedConfiguration:
We should use it, or something close to it.
Attachments (2)
Change History (15)
Changed 13 years ago by
comment:1 Changed 13 years ago by
comment:2 Changed 13 years ago by
Please do -- what you've got so far rocks. When you've got something you're comfortable, modify this ticket to include [patch] in the title so we'll know it's ready for review.
comment:3 Changed 13 years ago by
comment:4 Changed 13 years ago by
comment:5 Changed 13 years ago by
comment:6 Changed 13 years ago by
Changed 13 years ago by
Here is my work on the rss system
comment:7 Changed 13 years ago by
Changed 12 years ago by
comment:9 Changed 12 years ago by
comment:10 Changed 12 years ago by
comment:11 Changed 12 years ago by
comment:13 Changed 11 years ago by
Milestone Version 1.0 deleted
Mind if I keep wading into it? I'd also like to modify links, add namespace support, and tackle a few other minor issues. I'll start maintaining it as a patch to the main code.
|
https://code.djangoproject.com/ticket/329
|
CC-MAIN-2018-17
|
refinedweb
| 222
| 62.72
|
Converting a string either from lowercase to uppercase or from uppercase to lowercase can be done in two ways, i.e. with pre-defined C functions and without them.
First let us see the simpler approach of using pre-defined C functions.
Approach 1: Using the functions strlwr() and strupr() from string.h:
- strlwr() converts the input string into lowercase and strupr() converts the string to uppercase. Both are a part of the string.h library.
- Code demonstrating the strlwr and strupr functions:

#include <stdio.h>
#include <string.h>

int main()
{
    char string[100];
    printf("Enter a string : ");
    fgets(string, 100, stdin);  /* read string input from standard input */
    printf("The string in lower case : %s\n", strlwr(string));
    printf("The string in upper case : %s\n", strupr(string));
    return 0;
}
Output:
Input a string: Hello! Howdy! HII The string in lowercase is as follows: hello! howdy! hii The string in uppercase is as follows: HELLO! HOWDY! HII
Approach 2: Using the functions tolower() and toupper():
- Since the above code is not compatible with the standard C library, we have an alternate approach.
- The ctype.h library includes function tolower() – to convert string to lower case and toupper() – to convert string to uppercase.
- The ctype.h header file of the C Standard Library declares several functions that are useful for testing and mapping characters.
- The difference here is that the functions included under ctype.h work on integer values.
- Hence, during case conversion the function considers the ASCII value of the character in order to convert it to the desired case.
Declaration:Following is the declaration for tolower() and toupper() functions respectively.
int tolower(int c); int toupper(int c);
Here c is the letter to be converted to lowercase/uppercase. This function returns the lowercase/uppercase equivalent of c, if such a value exists, else c remains unchanged. The value is returned as an int that can be implicitly cast to char.
Code: We keep converting character at each index into lower / uppercase, continuously checking for string end in every iteration.
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

int main()
{
    int i = 0;
    size_t size = 100;
    char *string;

    printf("Input a string: ");
    /* These 2 lines are very important. */
    string = (char *) malloc(size);
    getline(&string, &size, stdin);

    while (string[i])                    /* checking for null character */
    {
        string[i] = tolower(string[i]);  /* converting to lowercase */
        i++;
    }
    printf("\nThe string in lowercase is as follows: ");
    puts(string);

    i = 0;                               /* resetting index */
    while (string[i])                    /* checking for null character */
    {
        string[i] = toupper(string[i]);  /* converting to uppercase */
        i++;
    }
    printf("\nThe string in uppercase is as follows: ");
    puts(string);

    free(string);
    return 0;
}
Output:
Input a string: Hello! Howdy! HII The string in lowercase is as follows: hello! howdy! hii The string in uppercase is as follows: HELLO! HOWDY! HII
Approach 3: Without using pre-defined functions of C:
- In this approach we will create two user defined functions upper and lower to convert case of characters in the string.
- Remember that strings are nothing but character arrays and have the same properties for function call and return as previously discussed while learning arrays.
- We know that in computer memory the ASCII codes of the characters are what is actually stored and understood.
- The ASCII codes for A-Z range from 65 to 90, and for a-z from 97 to 122.
- So in order to convert a character to lowercase we add 32 (i.e. 97 - 65 = 32, the difference between the two cases) to the character value.
- In order to convert a character to uppercase we subtract 32 from it.
Code:
#include <stdio.h>
#include <stdlib.h>

void lower_string(char []);
void upper_string(char []);

int main()
{
    size_t size = 100;
    char *string;

    printf("Enter some text of your choice: ");
    string = (char *) malloc(size);
    getline(&string, &size, stdin);

    lower_string(string);  /* function call to convert to lowercase */
    upper_string(string);  /* function call to convert to uppercase */

    free(string);
    return 0;
}

void lower_string(char str[])
{
    int i = 0;
    while (str[i] != '\0')
    {
        if (str[i] >= 'A' && str[i] <= 'Z')  /* uppercase character? */
        {
            str[i] = str[i] + 32;            /* convert to lowercase */
        }
        i++;
    }
    printf("The string in lower case: %s\n", str);
}

void upper_string(char str[])
{
    int i = 0;
    while (str[i] != '\0')
    {
        if (str[i] >= 'a' && str[i] <= 'z')  /* lowercase character? */
        {
            str[i] = str[i] - 32;            /* convert to uppercase */
        }
        i++;
    }
    printf("The string in upper case: %s\n", str);
}
Output:
Enter some text of your choice: Where have YOU been ? The string in lower case: where have you been ? The string in upper case: WHERE HAVE YOU BEEN ?
Thus, we have seen various methods to perform case-change operations. We also saw the use of fgets and getline. Henceforth getline/getdelim will be used in all string programs. Also, a key takeaway is to always write code that is compatible on all platforms. Approaches 2 and 3 are examples of this; Approach 1 is restricted to the Microsoft C library.
Report Error/ Suggestion
|
https://www.studymite.com/c-programming-language/examples/changing-case-of-strings-in-c/
|
CC-MAIN-2021-39
|
refinedweb
| 810
| 63.39
|
std namespace. It's defined in <stdlib.h>. It's a call to the Operating System. It's platform dependent. It can be a gateway for malicious code. Why go through that trouble when all you could do instead is wait for a newline or for the user to press any key?
#include <cstdlib> and don't put std:: in front of it, even if you're not using namespace std;, so it's only kind of in the std namespace because it's part of C. I will retract and call it "standard C++", but it shouldn't be in such wide, "standard" usage.
system("PAUSE");.
system()somehow fear it'll end up being used somewhere other than school homework. Like a real, practical program written by an advanced programmer. Get real.
|
http://www.cplusplus.com/forum/lounge/49322/
|
CC-MAIN-2015-48
|
refinedweb
| 131
| 75.2
|
Smart Dog Kennel Containment Electric Fence Wires Pet Training Products KD-660
US $9.5-35.0 / Piece
1 Piece (Min. Order)
Portable folding Dog House Cat and dog winter bed
US $4.55-4.8 / Piece | Buy Now
30 Pieces (Min. Order)
High quality wholesale low price large outdoor fence dog kennel/wireless dog fence for sale
US $24.99-31.99 / Pieces
100 Pieces (Min. Order)
Wireless outdoor electric dog training fence -026system portable fences for dogs
US $30-35 / Set
10 Pieces (Min. Order)
Fine quality hot sell puppy training pads dog pee cozy pet kennel mat pad
US $2.3-15 / Piece
100 Pieces (Min. Order)
pet cage with fences
510 Sets (Min. Order)
electronic pet dog cage bars of the fence , S-228 Teddy small and medium-sized dog kennel supplies of the fence
US $18.0-26.0 / Piece | Buy Now
1 Piece (Min. Order)
Portable HP400 Hyperbaric Oxygen Dog Cages Supplies For Pet Training Equipment On Sale
US $2000-2500 / Set
1 Set (Min. Order)
dog kennels Three tones WIN-10005 how to train a puppy with a clicker
US $0.4-0.7 / Piece
5000 Pieces (Min. Order)
import to thailand dog cage Pet Pad
US $0.02-0.03 / Piece
30000 Pieces (Min. Order)
Top Quality Customize Portable Dog Fence Kennels
US $30-35 / Set
1 Set (Min. Order)
Enchante Accessories Dog Kennel Fence Panel Bed New Technology
US $15-18 / Piece
1 Piece (Min. Order)
hot galvanized temporary mesh fence for dog
US $19.5-38.5 / Piece
10 Pieces (Min. Order)
China made hot galvanized outdoor portable dog proof fence
US $245-425 / Set
50 Sets (Min. Order)
netting system Smart Dog In-ground DF-113R wireless dog fence cage
US $28.12-76.96 / Set
1 Set (Min. Order)
Dog Kennels Wireless Vibrator Sleep Trainer
US $62.48-64.71 / Piece | Buy Now
10 Pieces (Min. Order)
Hot Wire Dog Fence Iron Fence Dog Kennel Electric Dog Fence
US $17.556-23.822 / Piece
1000 Pieces (Min. Order)
pet training tunnel,play tunnel,outdoor play tunnels
US $15-20 / Piece
1000 Pieces (Min. Order)
A-200 Smart Electronic wire mesh fencing dog kennel
US $0.01-40 / Unit
100 Units (Min. Order)
Hight Quality !!! Wireless Dog Fences AT-216F dog fence kennel
US $50-100 / Set
1 Set (Min. Order)
10 Acres Electric Fences For Kennels/Dog/Pet
US $39.0-41.0 / Set | Buy Now
1 Set (Min. Order)
cheap chain link dog kennels
US $39.9-99.9 / Box
50 Boxes (Min. Order)
Popular Electrical Dog Kennel Fence Panel for Training
US $9.5-35.0 / Piece
1 Piece (Min. Order)
Hot Wire Dog Fence Dog Kennel Electric Dog Fence A-200
US $26.8-32.8 / Piece
10 Pieces (Min. Order)
Various styles attractive fashion pet training pad pet kennel mat pad cover
US $2.3-15 / Piece
100 Pieces (Min. Order)
new material china wireless pet fencing 023 dog cage fence
US $20-26 / Set
50 Pieces (Min. Order)
Designer Best Sell Hot Wire Dog Fence Kennel
US $30-35 / Set
1 Set (Min. Order)
Trainertec temporary kennels DF-113R beautiful wireless dog fence
US $28.12-67.96 / Set
1 Set (Min. Order)
Iron Protable Wireless Invisible Electric Fence For Dog Kennel 2016 Hot Selling X800
US $28-35 / Piece
1 Piece (Min. Order)
United States popular outdoor temporary dog fence
US $245-425 / Set
50 Sets (Min. Order)
Top Selling Gadgets Dog Kennels Electric Shock Wireless Vibrator Vibrating Dog Collar
US $25-30 / Piece
1 Piece (Min. Order)
|
http://www.alibaba.com/countrysearch/CN/cage-dog-training.html
|
CC-MAIN-2017-04
|
refinedweb
| 650
| 77.84
|
No argument constructor …
In Java it is good practice to set the initial capacity of collections and maps.
Many developers (me included) have the habit to declare a new Collection or Map using the no argument constructor, e.g.:
Map exampleMap = new HashMap();
With this instruction Java initializes a new HashMap object with the attribute
loadFactor at 0.75 and the
DEFAULT_INITIAL_CAPACITY at 16.
The HashMap internally stores its values in an array of HashMap$Node objects (at least until the map becomes very large).
The initialization doesn't yet create the array; it is instantiated only on the first insert into the map (e.g. using
put), when
Java creates the internal array with something like:
Node<K,V>[] tab = (Node<K,V>[])new Node[16].
… it will grow
Every time an entry is added to the Map, the HashMap instance checks that the number of values in the bucket array does not exceed its capacity multiplied by the load factor (default 0.75). In our case: 16 (capacity) * 0.75 (load factor) = 12 (threshold).
What happens when the 13th value is inserted in the array? The number of entries in the array is more than the threshold and the HashMap instance calls the method:
final Node<K,V>[] resize().
This method creates a new array of
Node with a capacity of the current store (16) * 2:
(Node<K,V>[])new Node[32]
The values of the current bucket array are 'transferred' into the new array, and the threshold is also doubled.
The table shows how the size of the bucket array grows as new entries are added.
The rehashing done in
resize() requires computational power and should be avoided if possible.
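The doubling rule described above can be modeled in a few lines of Python (a sketch of the resizing logic for illustration, not the JDK code; the function name is mine):

```python
def growth_steps(capacity=16, load_factor=0.75, inserts=100):
    """Model HashMap's resizing: double capacity (and threshold)
    whenever the number of entries exceeds the threshold."""
    threshold = int(capacity * load_factor)
    steps = [(capacity, threshold)]
    for size in range(1, inserts + 1):
        if size > threshold:
            capacity *= 2
            threshold *= 2
            steps.append((capacity, threshold))
    return steps

# 100 inserts trigger four resizes: 16 -> 32 -> 64 -> 128 -> 256
```

Running it shows the first resize at the 13th insert, as the text describes: the map jumps from capacity 16 (threshold 12) to capacity 32 (threshold 24).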
defining the initial size in the constructor
If you have a defined number of entries, or you know how many values the map will contain, it's recommended to set the initial capacity accordingly.
Say you will have 100 entries and not one more:
new HashMap(100) will generate a bucket array big enough to hold them, without any rehashing during the inserts.
If the initial capacity is passed to the constructor, the size of the internal table is calculated using the following algorithm:
/**
 * Returns a power of two size for the given target capacity.
 */
static final int tableSizeFor(int cap) {
    int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
For an initial capacity of 100, the result is 128.
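The same rounding can be reproduced in Python to check the numbers (a reimplementation for illustration, not the JDK source):

```python
MAXIMUM_CAPACITY = 1 << 30  # HashMap's maximum table size

def table_size_for(cap):
    """Smallest power of two >= cap, as HashMap.tableSizeFor computes it."""
    if cap <= 1:
        return 1
    n = (1 << (cap - 1).bit_length()) - 1  # all-ones mask covering cap - 1
    return MAXIMUM_CAPACITY if n >= MAXIMUM_CAPACITY else n + 1

# table_size_for(100) -> 128, matching the text
```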
If your map won’t grow more than the initial size, there won’t be rehashing of your data.
This solution is optimal if you know the maximum size of the HashMap.
General tip from the code source.
JDK version used for the tests
OpenJDK version 11.0.1
|
https://javaee.ch/java-hashmap
|
CC-MAIN-2019-30
|
refinedweb
| 464
| 63.59
|
In my prior post on Vala’s language features, I discussed enums and how I appreciated Vala’s implementation of them. I feel that Vala’s enums straddle an interesting line of utility and pragmatism. It took me a while to learn about their features, partially because documentation has been sparse (but is getting better) and partially because as a C / C++ / Java programmer, I’d had hammered into me a set of expectations about enums that Vala didn’t quite adhere to. (I had a similar learning curve, for many of the same reasons, about Vala’s lock keyword.)
Learning interface in Vala was a similar experience. Consider the Java Language Specification’s opening statement about interfaces, which Vala’s interfaces look to be a descendant of:
An interface declaration introduces a new reference type whose members are classes, interfaces, constants and abstract methods. This type has no implementation, but otherwise unrelated classes can implement it by providing implementations for its abstract methods.
Compare to this statement in the C# Programmer’s Guide:
Interfaces consist of methods, properties, events, indexers, or any combination of those four member types. An interface cannot contain constants, fields, operators, instance constructors, destructors, or types. It cannot contain static members. Interfaces members are automatically public, and they cannot include any access modifiers. … The interface itself provides no functionality that a class or struct can inherit in the way that base class functionality can be inherited.
What’s interesting about Java and C# interfaces is what they can’t provide: implementation. Interfaces are often touted as a sane solution to the problems surrounding multiple inheritance, but it always struck me as odd (and a bit of a blind-spot on the part of their boosters) that interfaces provide no reusable code. After all, isn’t that the name of the game? Especially for a technique that’s replacing a form of inheritance, which is all about reusable code?
(I have a pet theory that if one was to study the history of the development of software development technologies — languages, tools, paradigms, all of it — it’s primarily a history of reusable code. How much money and manpower has been dumped into this holiest of Holy Grails: write once, debug a bit more, reuse a billion times.)
Like enum and lock, Vala offers an interesting interpretation of interface with a couple of surprises. I’m not sure how much of it is due to the mechanics of GTypeInterface, Vala’s influences from Eiffel, or simply a reflection of Jürg’s vision, but it’s cool all the same.
Let’s start with a simple Vala interface that looks familiar to any Java or C# programmer:
interface Agreeable { public abstract bool concur(); }
In-the-know Java programmers will say that the public and abstract keywords are allowed but not required; in C#, they’re simply not allowed. But in Vala, neither are optional:
interface Agreeable { bool concur(); }
produces this compiler output:
error: Non-abstract, non-extern methods must have bodies
That seems to suggest that non-abstract, non-extern methods in interfaces can have bodies (“must implies can”, kind of a programmer’s variant of Kant’s argument from morality).
And sure enough, this will compile:
interface Agreeable { public bool concur() { return true; } }
What’s going on here? Simple: reusable code.
Vala interfaces are much more than hollow declarations of required methods. Interfaces are full-fledged members of the object hierarchy, able to provide code to inheriting classes. The reason an interface is not an abstract class is that an interface has no storage of its own, including a vtable. Only when an interface is bound to a class (abstract or concrete) is storage allocated and a vtable is declared. In other words, while the above is legal, this is not:
interface Agreeable {
    bool yep = true;

    public bool concur() {
        return yep;
    }
}
gets you this:
error: Interfaces may not have instance fields
Bingo. That boolean is an instance member, which requires instance storage, which an interface does not have by itself.
So what good is allowing an interface to provide reusable code if it has no fields of its own? There’s a number of patterns of use, in particular Facade-style methods. For example, an interface could declare a handful of abstract primitive methods and offer a helper method that uses them in concert according to contract:
interface Agreeable {
    public abstract bool concur(int i);
    public abstract string explanation(bool concurred);

    public void process(int i) {
        if (concur(i))
            stdout.printf("Accepted: %d %s\n", i, explanation(true));
        else
            stdout.printf("Not accepted: %d %s\n", i, explanation(false));
    }
}

class OnlyOdds : Agreeable {
    public bool concur(int i) {
        return (i & 0x01) == 0x01;
    }

    public string explanation(bool concurred) {
        return concurred ? "is odd" : "is not odd";
    }
}

class OnlyMultiplesOfTen : Agreeable {
    public bool concur(int i) {
        return (i % 10) == 0;
    }

    public string explanation(bool concurred) {
        return concurred ? "is a multiple of ten" : "is not a multiple of ten";
    }
}
First, note that the implementations of concur() and explanation() in the classes don’t use the override keyword even though you must use the abstract keyword in the interface. I’m not sure of the reasoning, but so it goes.
Also know that virtual methods, signals, and virtual signals have their own peculiarities with interfaces. I’ll deal with them in another post.
So, a pretty contrived and very silly example, but notice how process() understands Agreeable’s interface and contract and hides those details behind a single method call. This is useful.
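For comparison, the same Facade-style pattern exists in Python, where abstract base classes likewise mix abstract primitives with concrete helper code (an analogy to the Vala example above, not Vala itself):

```python
from abc import ABC, abstractmethod

class Agreeable(ABC):
    """Abstract primitives plus a concrete helper that relies on them."""

    @abstractmethod
    def concur(self, i): ...

    @abstractmethod
    def explanation(self, concurred): ...

    def process(self, i):
        # Reusable code living on the "interface" itself.
        if self.concur(i):
            return f"Accepted: {i} {self.explanation(True)}"
        return f"Not accepted: {i} {self.explanation(False)}"

class OnlyOdds(Agreeable):
    def concur(self, i):
        return (i & 0x01) == 0x01

    def explanation(self, concurred):
        return "is odd" if concurred else "is not odd"
```

As in the Vala version, the helper hides the contract behind a single method call while the subclasses only supply the primitives.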
Going back to those language specifications earlier, remember that Java and C#’s interfaces cannot contain static methods. In Vala they can:
interface Agreeable {
    /* ... */

    public static void process_many(Agreeable[] list, int i) {
        foreach (Agreeable a in list)
            a.process(i);
    }
}
This allows for binding aggregator-style code with the interface itself, rather than building utility namespaces or classes. Again, this is useful.
However, if the following code is written:
Agreeable[] list = new Agreeable[2];
list[0] = new OnlyOdds();
list[1] = new OnlyMultiplesOfTen();
Agreeable.process_many(list, 10);
you get this compiler error:
error: missing class prerequisite for interface `Agreeable', add GLib.Object to interface declaration if unsure
What’s this about?
It’s due to another freedom in Vala that is lacking in Java and C#. Vala classes don’t have to descend from a single master class (i.e. Object). Unlike the other two languages, if a Vala class is declared without a parent, there is no implicit parent; Vala registers the class with GType as a fundamental type. If you don’t know what that means, read this. You probably still won’t know what that means, however.
Because Agreeable is declared without a prerequisite class, Vala can’t produce the appropriate code to store it in a tangible data structure, in this case, an array. (Update: As Luca Bruno explains in the comments, this is because of Vala’s memory management features.) This solves the problem:
interface Agreeable : Object {
What this means is that any class that implements Agreeable must descend from Object (i.e. GObject), meaning we need to change two other lines in the code:
class OnlyOdds : Object, Agreeable {
class OnlyMultiplesOfTen : Object, Agreeable {
Although Agreeable now looks to descend from Object, it does not. It merely requires an implementing class to descend from Object. (A subtle difference.) Interfaces can also require other interfaces, and like classes, it can require any number of them:
interface Agreeable : Object, Insultable, Laughable {
Like requiring Object, this means that any class implementing Agreeable must also implement the other interfaces listed (Insultable, Laughable). This does not mean that Agreeable must implement those interfaces’ abstract methods. In fact, it can’t, one place where code reuse can’t occur.
Prerequisites also mean that Agreeable’s code can rely on those interfaces in its own code, and therefore can do things like this:
interface Agreeable : Object, Insultable, Laughable {
    /* ... */

    public void punish(int i) {
        if (concur(i))
            laugh_at();
        else
            insult();
    }
}
… where laugh_at() is an abstract member of Laughable, insult() is an abstract member of Insultable, and of course concur() is its own abstract member. In other words, because Agreeable knows it’s also Insultable and Laughable, it can treat itself as one of them.
It’s easy to go crazy with interfaces, prerequisites, and helper methods, but most great languages have their danger zones of excess and abuse — features that are the hammer that makes everything look like a nail. Still, I think code reuse is the most important goal of any programming technology — language, tool, or paradigm — and I’m glad Vala has given it some thought in terms of interface.
|
https://blogs.gnome.org/jnelson/category/hacking/page/2/
|
CC-MAIN-2021-39
|
refinedweb
| 1,434
| 51.58
|
Contents
I gave an introduction to variadic templates last week. Today I will talk about some more features that have or will be added in that area in C++11, 14 and 17.
The sizeof… operator
The
sizeof... operator is a special form of pack expansion. It simply returns the number of pack elements and works on both template parameter packs and function parameter packs:
template <class... Ts>
void printCount(Ts... args) {
  std::cout << sizeof...(Ts) << ' ' << sizeof...(args) << '\n';
}

// prints "3 3\n"
printCount(22, std::optional{0}, "!");
Note that, like
sizeof, the
sizeof... operator returns a compile time constant.
Variadic function templates: working on every argument
There are basically two ways to work on function parameter packs: work on everything at once using pack expansion, and recursively calling the same function, chewing off one argument at a time.
Pack expansion tricks
Sometimes we just want to call a function for each argument. However, pack expansion works only in places where comma separated lists are allowed. This is not a comma separated list, obviously:
doSomething(arg1); doSomething(arg2); ... doSomething(argN);
So it’s not surprising that this won’t compile:
template <class... Args>
void doSomethingForAll(Args const&... args) {
  doSomething(args)...;
}
Luckily, we have
std::initializer_list, so often it can be sufficient to use them as the place for the expansion:
template <class... Args>
void doSomethingForAll(Args const&... args) {
  auto x = {doSomething(args)...};
}
This will make
x to be an
std::initializer_list of whatever
doSomething returns. However, since that might be
void or a mix of different types, it won’t always compile. A trick then is to create an expression as the expansion pattern that calls the function but has always the same non-void type. Using the comma operator, this is easy:
template <class... Args>
void doSomethingForAll(Args const&... args) {
  auto x = {(doSomething(args), 0)...};
}
Now, the function calls are just a side effect, and the result will be a
std::initializer_list<int> filled with zeros. The compiler will warn about the unused
x, but since we now know the type of the list and the fact that it’s unused, we can cast it to
void to silence the warning. We can do the same in case a static analyzer eagerly warns about the unused
doSomething returns or worse, if the
doSomething return type has overloaded
operator,:
template <class... Args>
void doSomethingForAll(Args const&... args) {
  (void)std::initializer_list<int>{
    ((void)doSomething(args), 0)...
  };
}
One argument at a time
Suppose we want to print all the arguments of our function, separated by commas, to
cout. We could use the above trick, with
doSomething being a function that prints the value plus a comma. The problem then is with the last argument which should not be followed by a comma, but
doSomething won’t be able to detect that. The straight forward approach is recursion:
template <class Arg>
void print(Arg const& arg) {
  std::cout << arg;
}

template <class Head, class... Tail>
void print(Head const& head, Tail const&... tail) {
  std::cout << head << ", ";
  print(tail...);
}
Whenever we call print with two or more arguments, the variadic overload is chosen: it prints the head and recurses on the tail, until the single-argument overload terminates the recursion.
With C++17 we have constexpr if and can reduce this function a bit:
template <class Head, class... Tail>
void print(Head const& head, Tail const&... tail) {
  std::cout << head;
  if constexpr (sizeof...(tail) > 0) {
    std::cout << ", ";
    print(tail...);
  }
}
Here, the body of the
if will only be compiled if
tail contains at least one element. Without constexpr if this would result in a compile error, since the compiler would not find an appropriate print overload for the final call with an empty pack.
As always, any recursion can be converted into an iteration – which for variadic templates is pack expansion:
template <class Head, class... Tail>
void print1(Head const& head, Tail const&... tail) {
  std::cout << head;
  (void)std::initializer_list<int>{((std::cout << ", " << tail), 0)...};
}
Nested packs expansion
I already had written about the simultaneous expansion of multiple packs, if they appear in the same expansion pattern. Something that might look similar at first sight is the expansion of nested packs: We can have a pack expansion pattern as part of another pack expansion pattern.
In such a case, the innermost pattern is expanded first, including simultaneous expansion of all the contained packs. Then the resulting outer pattern containing the expanded inner pattern is expanded and so on.
template <class T, class... Args>
auto pairWithRest(T const& t, Args const&... args) {
  return std::make_tuple(std::make_pair(t, args)...);
}

template <class... Args>
auto selfCartesianProduct(Args const&... args) {
  return std::tuple_cat(pairWithRest(args, args...)...);
}

auto cp = selfCartesianProduct(1, "!", 5.0);
In this example,
pairWithRest is a simple variadic template function with a normal pack expansion of
std::make_pair(t, args).... It returns a tuple of pairs. The interesting part is the call of that function in the
selfCartesianProduct function:
pairWithRest(args, args...)....
Here, the inner pattern is simply
args.... During the example call, this gets expanded to
1, "!", 5.0, obviously. The outer pattern after that is
pairWithRest(args, 1, "!", 5.0)..., which then gets expanded to
pairWithRest(1, 1, "!", 5.0), pairWithRest("!", 1, "!", 5.0), pairWithRest(5.0, 1, "!", 5.0).
This results in three tuples of pairs which then get concatenated via
tuple_cat.
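The expansion order may be easier to see in a Python transliteration (an analogy only; Python has no parameter packs, so tuples and star-args stand in):

```python
def pair_with_rest(t, *args):
    # std::make_tuple(std::make_pair(t, args)...): pair t with every argument.
    return tuple((t, a) for a in args)

def self_cartesian_product(*args):
    # The inner pattern args... expands first, then the outer call is
    # repeated once per element; sum(..., ()) plays the role of tuple_cat.
    return sum((pair_with_rest(a, *args) for a in args), ())
```

With three arguments this yields nine pairs, exactly the concatenation of the three tuples described above.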
Fold expressions
With C++17 we get a nice new feature for function parameter packs. Imagine if you wanted to concatenate an expanded pattern not by a comma separated list but by using an operator. That’s what C++17’s fold expressions are for:
template <class... Args>
bool containsZero(Args const&... args) {
  return ((args == 0) || ...);
}
Here, the return expression is equivalent to
((args1 == 0) || (args2 == 0) || ... || (argsN == 0)). We can use a lot of binary operators in fold expressions, and they come in slightly different variants:
- Unary right fold: (args + ...) is equivalent to (args1 + (args2 + ( ... + argsN))). If args is an empty pack, this is ill-formed for any operator but ||, && and the comma operator, which yield false, true and void(), respectively.
- Binary right fold: (args * ... * X) is equivalent to (args1 * ( ... * (argsN * X))), where X is some expression that is not a parameter pack. If args is empty, this evaluates to X.
- Unary left fold: (... | args) is equivalent to (((args1 | args2) | ... ) | argsN), i.e. like unary right fold, but with left association. The restrictions of unary right fold apply.
- Binary left fold: (X > ... > args) is equivalent to (((X > args1) > ... ) > argsN). (Yes, this will seldom make sense…)
Variadic variadic template template parameters
No, I am not stuttering. I am presenting this only as a treat and won’t go too deep into it. We have template template parameters, i.e. template parameters that are themselves templates:
template <template <class A, class B> class Container>
Container<int, double> f(int i, double d) {
  return Container<int, double>{i, d};
}

f<std::pair>(1, 2.3);  // returns std::pair<int, double>
f<std::tuple>(1, 2.3); // returns std::tuple<int, double>
Of course, we can have variadic templates where the parameters are templates:
template <template <class A, class B> class... Containers> //...
Or templates, where the parameters are variadic templates:
template <template <class... As> class X> //...
Well, we can combine the two!
template<class F, template<class...> class... T>
struct eval {
  F const& fun;
  eval(F const& f) : fun(f) {}

  auto operator()(T<int, double> const&... t) {
    return fun(t...);
  }
};

using PID = std::pair<int, double>;
using TID = std::tuple<int, double>;

int f(PID const&, TID const&) { return 22; }

int main() {
  eval<decltype(f), std::pair, std::tuple> x(f);
  auto xret = x(std::make_pair(22, 34.7), std::make_tuple(11, 47.11));
  std::cout << xret << '\n';
}
I think that should be enough for today 😉
12 Comments
Permalink
Hi Arne,
This is a great post — thanks! Just noticed a small error (unless I am missing something). In your first variadic variadic template template parameters example, shouldn’t that be ‘Container’ with a capital ‘C’ in the line that reads
return container<int, double>{i,d};
Thanks!
Permalink
Fixed it, thanks!
Permalink
I am a bit puzzled by the the first example of the “Pack expansion tricks” section. As I see it, this expansion:
doSomething(args)…;
ought to expand into a comma separated list of function calls, which is valid:
doSomething(arg1), doSomething(arg2), doSomething(arg3);
Of course I am missing something, but what?
Thank you for some interesting posts!
Permalink
Hi Niels, thanks for asking!
The problem is that pack expansion is only allowed in certain locations, e.g. initializer lists, braced initializers, template parameter lists, function argument lists, lists of base classes,… – basically the special case of
sizeof...and anything that already is a list context.
This is, however, not the case for a simple statement in a function, so, even though a comma separated list of function calls would happen to compile (because we have the comma operator), it is not allowed as a pack expansion location.
Permalink
Permalink
Hello Arne
thank you very much for one more nice post!
just a question:
template <class T> void doSomething(T t) {
std::cout << t << '\n';
}
template <class... Args> void doSomethingForAll(Args... a) {
(void)std::initializer_list<int>{(doSomething(a),0)...};
}
in this sample from your post, can doSomething be replaced with a lambda? if – yes, please example for a valid syntax.
Permalink
I would say “search&replace” so:
([](Args a) {}, 0)...
Permalink
silly me. just got a compiler and tried, I ended up with:
([](auto x){ print(x); }(args), 0)...)
Permalink
Yup, that’s like it. I’d probably use a named lambda to improve readability. The expansion then remaisn the same:
template <class... Args> void doSomethingForAll(Args... a) {
auto doSomething = [](auto t){ std::cout << t << '\n'; };
(void)std::initializer_list<int>{(doSomething(a),0)...};
}
Permalink
the notation is not enough clear, could you explain how it’s recognized (by compiler) :
[](auto x){ print(x); }(args)
Permalink
First:
[](auto x){ print(x); }is recognized as a lambda. Since it is not assigned to anything this expression creates a temporary lambda object or closure object, to use the right name.
Second:
<temporary_closure_object>(args)is then recognized as a use of the function call operator on that temporary closure object.
Does that answer the question?
Permalink
yes, thank you.
just didn’t face with such usage of lambda earlier
(args)
|
https://arne-mertz.de/2016/11/more-variadic-templates/
|
CC-MAIN-2018-34
|
refinedweb
| 1,675
| 59.19
|
When I'm finished working with an object I'd like to deselect it so I don't make any accidental changes. But it drives me nuts that such a simple procedure takes extra time. I know 3 ways to deselect selected objects, but I want to find a faster and simpler method.
Find a sky and click it. Disadvantages: Mostly it takes too much time and changes your working position
Shift + LMB on the selected object. Disadvantages: You can click something else and add it to the selection, and you need to aim at that object, which also takes time.
Click on empty space in the Project section. Disadvantages: The content of the opened folder changes, so you still need to aim, and if the opened folder is full of assets it gets hard to click on empty space. Plus my Project tab is on the second monitor, so I need to waste a lot of time on mouse dragging.
I really hope that there is a key combination for deselect, or some easier method to do this.
Answer by iDeally
·
Apr 14, 2018 at 07:09 AM
Create a new .cs script in the "Editor" folder. Then put the following code there:
using UnityEngine;
using UnityEditor;
public class EditorShortCutKeys : ScriptableObject
{
[MenuItem("GameObject/Deselect all %#D")]
static void DeselectAll()
{
Selection.objects = new Object[0];
}
}
Now you can press ctrl + shift + d to deselect all. If you wish to change the hotkey you should edit the "MenuItem" attribute. More info is here:
This almost works for iOS. What should i change in this script for "command+shift+D"?
As I see in docs, you should not change anything. It should work with provided example. Does it not?
It "mostly" works. The %# does not translate to control+shift though, so I can't figure out how to make a hotkey for it on mac.
Answer by Acr1m
·
May 16, 2014 at 09:16 PM
Yeah, this is a really annoying issue because a "Deselect All" command is easily found in most every major software :/
The cleanest and clearest way to deselect things in Unity is by Ctrl+LMB clicking the object in the "Hierarchy Window". (If you double click too fast, it will focus your Scene window on that object, so be careful.)
Another thing you can do is make a new script file and paste in the code found on THIS page. It will put a command in the top menu tabs of Unity, but I don't think it will add a hotkey, so don't get too excited :/
Both of the linked scripts attempt to register a hotkey (as described in the MenuItem documentation). One uses "_a", which is just the A key; the other uses "#a" which is Shift+A.
Answer by maltakereuz
·
Feb 10, 2018 at 05:37 AM
It really gets on my nerves, so here is my workaround:
Ctrl + Shift + N (in Hierarchy) to create a new dummy object
Select it (and only it) with the mouse or somehow
Ctrl + Alt + 1 (add it to selection group 1)
Mercilessly eliminate the dummy object
Now you can use Ctrl + Shift + 1 to deselect all (technically, select nothing, or the dead dummy, to be more accurate)
Not very nice, but better than scrolling out the scene to find an empty space to click each time. Why can it not just be Ctrl + Shift + A? Dear Unity dev team...
Deselect a selected object?
2
Answers
Instantiate in Editor
1
Answer
How do I print out which folder in the projects tab is currently selected ?
1
Answer
Find all object in Hierarchy
1
Answer
How to get component from an Object
2
Answers
|
https://answers.unity.com/questions/639981/quicky-deselect-all-selected-object-in-editor.html
|
CC-MAIN-2019-39
|
refinedweb
| 601
| 69.82
|
I am asking this here, because it is not really an issue but a usage question. Let me know if you prefer this on github issues.
If I want to register callback functions for certain events in napari, what is the correct way to do it?
For mouse callback functions you provide a decorator-style interface, e.g.
@viewer.mouse_drag_callbacks.append that wraps the function.
Now I want to create a callback function that is called whenever say a layer is added to the viewer.
I could not find a decorator functionality for this, but digging around in the source code and exploring with the ipython console I found that I can do this:
def handler(arg): print("type", arg.type) print("source", arg.source) print("index", arg.index) print("native", arg.native) viewer.layers.events.emitters['added'].connect(handler)
which seems to do what I want. Now I'm not sure whether this is the intended way to do it, or whether there is also a decorator-style way to register callbacks?
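For what it's worth, a connect method that returns its argument can serve both styles at once; this is a minimal stand-in sketch, not napari's actual EventEmitter implementation:

```python
class EventEmitter:
    """Tiny stand-in for an event emitter with connect()-style registration."""

    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)
        return callback  # returning the function lets connect() double as a decorator

    def emit(self, event):
        for callback in self._callbacks:
            callback(event)

added = EventEmitter()

@added.connect          # decorator style...
def on_added(event):
    print("layer added:", event)

added.connect(print)    # ...or plain call style, as with viewer.layers.events
```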
|
https://forum.image.sc/t/registering-callback-functions-for-events-in-napari/32210
|
CC-MAIN-2020-40
|
refinedweb
| 171
| 67.25
|
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.
from sympy import * init_printing()
import sympy.ntheory as nt
Test whether a number is prime.
nt.isprime(2011)
Find the next prime after a given number.
nt.nextprime(2011)
What is the 1000th prime number?
nt.prime(1000)
How many primes less than 2011 are there?
nt.primepi(2011)
We can plot $\pi(x)$, the prime-counting function (the number of prime numbers less than or equal to some number x). The famous prime number theorem states that this function is asymptotically equivalent to $x/\log(x)$. This expression approximately quantifies the distribution of the prime numbers among all integers.
import numpy as np import matplotlib.pyplot as plt %matplotlib inline
x = np.arange(2, 10000) plt.plot(x, list(map(nt.primepi, x)), '-k', label='$\pi(x)$'); plt.plot(x, x / np.log(x), '--k', label='$x/\log(x)$'); plt.legend(loc=2);
Let's compute the integer factorization of some number.
nt.factorint(1998)
2 * 3**3 * 37
A lazy mathematician is counting his marbles. When they are arranged in three rows, the last column contains one marble. When they form four rows, there are two marbles in the last column, and there are three with five rows. The Chinese Remainder Theorem can give him the answer directly.
from sympy.ntheory.modular import solve_congruence
solve_congruence((1, 3), (2, 4), (3, 5))
There are infinitely many solutions: 58, and 58 plus any multiple of 60. Since 118 seems visually too high, 58 is the right answer.
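The same system can be checked without SymPy by brute force over one period of the moduli (a sketch for verification, not how solve_congruence works internally):

```python
from math import lcm

def crt_smallest(*pairs):
    """Smallest x with x % m == r for every (r, m) pair, plus the period."""
    period = lcm(*(m for _, m in pairs))
    for x in range(period):
        if all(x % m == r for r, m in pairs):
            return x, period
    return None  # no solution when the congruences are incompatible

# crt_smallest((1, 3), (2, 4), (3, 5)) -> (58, 60)
```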
You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).
IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
|
http://nbviewer.jupyter.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter15_symbolic/05_number_theory.ipynb
|
CC-MAIN-2017-47
|
refinedweb
| 301
| 68.97
|
randomguy123451Member
Content count25
Joined
Last visited
Community Reputation105 Neutral
About randomguy123451
- RankMember
any reason that heirarchies can not be registered for value types
randomguy123451 replied to randomguy123451's topic in AngelCodeha..ha.. Actually my bad. It was a typing mistake. I do not want the "ZFraction" and "ZIrrational" to be a subclass of "ZInteger". I want the "ZFraction" and "ZIrrational" to be a subclass of "Real". I have edited and corrected my original question. We keep ZFraction , because we want to store say 1/3 instead of 0.3333333333..... but many Irrational do not have fraction representations, they may be quadratic surds etc. Actually, I want to develop a object oriented scripted framework for high speed " mathematical computation", as it needs to be very high speed hence i am not doing it in java. I want to do it ALL in C++ classes, but only the interfaces will be exposed to angelscript. The reason i chose angelscript has nothing to do with static typing or feature x or feature y . [b]i chose angelscript because it lets me define [u]"my own" types(ie, value types) which will exist on "stack", not on heap, this will give me less cost to speed performance compared to heap approach[/u]. This (custom-classes on stack) is not possible in almost any of the C like syntax based scripting languages for C++, it seems to be possible only in angelscript.[/b] But on the above said url in my question , when i was preliminarily reading . It says that if i make them value types then i can not make object oriented heriarchies in them. [b]my heirarchy (ZFraction and ZIrrational are subclasses of ZReal . There are other classes in this heirarchy as well ZComplex, ZNatural ) etc. All will be implemented as C++ classes as proper parent and child classes.[/b] Now, you said in your reply that [i] "It's currently not possible to do reference casts for value types, but that doesn't mean you can't register both ZInteger and ZFraction even if they are base and subclass in C++."[/i] Now, from your reply i could not understand much, that by any workaround whether you are saying its possible: yes or no . 
If I expose ZFraction, ZIrrational and ZReal (originally implemented in C++) to AngelScript [u][b]as value types[/b][/u], are you saying that I can still expose the child-parent relationship of ZFraction and ZReal to AngelScript, even though they are value types, using some workaround? Any relevant URL for this? Sorry for the trouble, but it is extremely important for me.
implicit data type conversion functions from Class A to class B
randomguy123451 replied to randomguy123451's topic in AngelCodeThanks a lot. :-)
implicit data type conversion functions from Class A to class B
randomguy123451 posted a topic in AngelCode
any reason that hierarchies can not be registered for value types
randomguy123451 posted a topic in AngelCode
C#-style property getters/setters
randomguy123451 replied to InvalidPointer's topic in AngelCode: Thumbs up to InvalidPointer and WitchLord for incorporating this into AngelScript. This brings AngelScript a step closer to powerful languages. My theme for an embedded language is to have all the object-oriented, functional and other developer-friendly features in the core language itself, but without the baggage of a large runtime library. Properties are a very important part of that.
- I am also going to consider the Dart language by Google, considering that it will also be executable in a VM (apart from its JavaScript mode). Not many details are out yet; my main worry is the size of their VM. That I will have to look into.
- May I request that you keep a roadmap web page on the AngelScript homepage? The goals link there is different from an upcoming/wish-list features page, which would be a list of user requests you have approved. A version number can be given for each feature, but **no timeline** for the version if you can't promise one. This will at least keep users interested: someone may form the opinion that "though this language has no feature X, the developer is interested, so I can start integrating my classes today and be ready whenever that feature comes," or, alternatively, "this language has no feature X, it is essential for me, so I should look for some other scripting language." As in a 2009 thread, you said that namespaces are on your to-do list. Most importantly, with this roadmap web page, if somebody needs a feature X that is already on your approved roadmap (to-do list), he may contribute financially so that you can implement that feature early. A sample roadmap: In roadmap (these features may be in the core or as an add-on): namespaces (version 3.1), exceptions (version 2.9), autoboxing (version 2.9), closures (version 2.7). Not in roadmap: mixins, multiple inheritance.
- Again, thanks for the positive roadmap. Closures are very much needed in a scripting language; they lead to less and less code, especially in GUI handling, mathematical computation, and the handling of maps, lists, etc. They reduce clutter and the chance of bugs by a lot. Source code with closures also reads very naturally and logically.
- And this (object-oriented) mirroring is of the API only (which is still a large API), not of the classes behind the API.
- Thanks for the positive roadmap. My target users are developers themselves, whose higher technical management does not trust them to delve into C++ pointer mismanagement, but wants them in the safe sandbox of a small scripting language that is still powerfully object oriented. Think of it like this: I consume many C++ (or C) APIs as a good C++ (third-generation language) developer, but I am not apt at assembly language (one layer lower), and I am also not good at a database administrator role like SQL (declarative, fourth-generation language) optimization (one layer higher); I am good at my own layer, C++. In the same way, a project manager may not want to expose intern-level or junior programmers to the pointer-mismanagement risk of C++, but still trusts their object-oriented instincts, as many of them are good Ruby programmers (thanks to the Ruby on Rails hype). He also does not want Ruby or Python, due to their non-C syntax and their large size overhead for his application. So AngelScript / nullC (code.google.com/p/nullc/) / JewelScript will mean: no pointer mismanagement (good for the junior programmers coming from Ruby and PHP), still as much object orientation as possible (good for the tech lead), C syntax, and a safe sandbox. Hence arises a "purely object oriented but still safer" need for the script to mirror the application code in (almost) all aspects.
- @_orm_: I am sorry if I asked many questions in a single day and that was inconvenient to you or anybody else. When I saw the AngelScript and nullC languages I became enthusiastic and tried to decide on one of them for my application. As there were four or five different features essential for me in a scripting language, I tried to satisfy my urgency, aggravated by my excitement for AngelScript and nullC. I am sorry for that. But you could have reminded or warned me about this first, and only if I still didn't comply over the next several days could you have said anything about me. Instead, at the first instance you made personal comments about me, which is very sad. A community around a language has to be built with a generally helpful attitude (which may be combined with humble disciplinary posts like "please read the FAQ; you should not do this or that"). If people get personal on the very first day at the slightest annoyance, it will be hard for curious guests and they will go away, and they could have been future contributing members of the community. Have you seen stackoverflow.com, how people mostly contribute on topic, and how well it is moderated, with removed or edited posts marked with the stated reason? That serves as a reminder to the user that he needs to change something in his behaviour, but nowhere is a personal attack tolerated there. Your post could also have been a candidate for removal by a moderator. I hope my point is well understood. Again, I am sorry if I ruffled anybody, or _orm_, by this post or past posts.
- I have lots of utility libraries with full OO implementations (that is, no C-like error codes, only C++ exceptions). These libraries, exposed to AngelScript, should feel the same to my end user: he should not be dealing with error codes in AngelScript, and he should be able to extend these exceptions with his own. So it will be good (for uniformity's sake and to avoid confusion) for the C++ and AngelScript models of the object hierarchy to be exactly the same, along with the way of dealing with them. Second, I would like to bar the user from using plain int and similar types; I want all types to be classes, so I will expose an Integer class for the same. Though this will not give me -2.abs or -2.abs() like in Groovy, it allows something like: Integer i = new Integer(-2); i.abs(); (this may not be strictly AngelScript syntax, I am just trying to express it roughly). Hence, implementation of OPTIONAL try-catch exceptions and AUTOBOXING (int to an Integer class and vice versa) will help me make my scripting language almost fully object oriented, even if not purely so. This is a must for me, due to almost all my clients' insistence on pure object-orientedness as a policy. Though I am dying for something like Scala (which nicely integrates primitive types and object-oriented types), I will happily be content with optional try-catch exceptions and autoboxing. That's why I was evaluating AngelScript with these concerns. One request: whenever you implement it, as in Groovy, do not make it mandatory to catch or rethrow the exceptions; it must be optional. [quote name='WitchLord' timestamp='1318291496' post='4871265'] I haven't implemented support for 'try/catch' blocks in the script language yet. It is still on my to-do list. Any particular reason why you want this? [/quote]
support for mixin
randomguy123451 replied to randomguy123451's topic in AngelCode: A mixin has functionality (functions) implemented, whereas an interface does not. It is a workaround in Groovy and similar languages, which already support interfaces but not multiple inheritance; even so, many such use cases are met through mixins. But a mixin is something I can still live happily without, so not much worry on my part.
- For further conceptual information on closures, kindly refer to write-ups on closures in Ruby, in nullC (a Lua-like small language), in Groovy, and in C#. So, does AngelScript support closures currently, or will it in the future? [quote name='randomguy12345' timestamp='1318224435' post='4870976'] S ( ) and angelscript and some other choices. Closures and exceptions are my main humble requirements from the scripting language, or should at least be on the immediate roadmap. Thanks in advance [/quote]
support for closure
randomguy123451 posted a topic in AngelCode: S ( ) and angelscript and some other choices. Closures and exceptions are my main humble requirements from the scripting language, or should at least be on the immediate roadmap. Thanks in advance.
Source: https://www.gamedev.net/profile/189935-randomguy12345/?tab=smrep
Introduction: Cross Cutting Concerns (functionality used across the system)
Separating cross cutting concerns from business logic can be a major step towards writing well designed, decoupled code. Let us ponder over the ways to deal with separating cross cutting concerns.
Inheritance
Inheritance pops into the mind straight away: we can inherit the common functionality and use it in our objects. But inheriting the common functionality requires us to design a base class. If we reuse this class in multiple places, modifying the class later could be a tough job.
Inheritance == Difficult to modify later(inelastic code)
Delegation
Delegation is a better way of dealing with cross cutting concerns. Remember "composition over inheritance" (delegation and composition address a common concern). But then we would have to make calls to delegate objects in many places, making it cumbersome.
Delegation == Cumbersome
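To make the cumbersomeness concrete, here is a tiny sketch (all names are hypothetical, not from the article) of handling a cross cutting concern by delegation: the concern lives in one place, but every business method must still remember to invoke it.

```java
// Hypothetical sketch: the cross-cutting concern (auditing) lives in a
// delegate, but each business method must call it explicitly.
class AuditLog {
    int entries = 0;
    void record(String what) { entries++; }   // stand-in for real logging
}

class BankAccount {
    private final AuditLog log;               // composition, not inheritance
    private int balance = 0;

    BankAccount(AuditLog log) { this.log = log; }

    void deposit(int amount) {
        log.record("deposit");                // boilerplate repeated per method
        balance += amount;
    }

    void withdraw(int amount) {
        log.record("withdraw");               // ...and again here
        balance -= amount;
    }

    int balance() { return balance; }
}
```

Every new business method repeats the `log.record(...)` call; forget it once and the concern is silently skipped. That scattering is exactly what AOP removes.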
Aspect Oriented Programming
Does this mean we are in the soup? Rather not. This leaves us with a third and the best approach of all: Aspect Oriented Programming. AOP spares us the brittleness of inheritance and the cumbersomeness of delegation. AOP shines in the area of separating cross cutting concerns.
What is AOP?
AOP allows us to modularize cross cutting concerns into special objects called Aspects, thereby creating a cleaner and decoupled code. With aspects in place objects no longer have to worry about performing the passive cross cutting concerns as AOP takes care of it all.
Terminologies related to AOP
Like any successful technology, AOP comes with its own set of jargon and terminology. Let us glance at those before moving on to the more serious business of understanding AOP.
- Concerns – These are the parts of the system, modularized based on their functions. There are two types of concerns: 1. core concerns and 2. cross cutting concerns. Core concerns relate to the business logic of the system, i.e. the active tasks the system performs, like generating a salary slip, getting an employee record, or doing a bank transfer. Cross cutting concerns are the passive tasks required to perform the active tasks, like logging, caching etc.
- Joinpoint – Joinpoint is a point in execution flow where some action takes place and a possibility arises to apply an Aspect(cross cutting concern). A joinpoint can be method being invoked, exception being thrown or change in state of an object.
- Advice – Each Aspect in AOP has a purpose i.e. the job it has to do. This job has to be applied at a Joinpoint. Job or purpose of an Aspect is called Advice. Apart from defining the job of aspect, Advice also defines the time when the Aspect is to perform the job. Should the job be applied before or after or both before and after a core concern finishes its execution.
- Pointcut – There can be many Joinpoints in the system, but not all are chosen to be advised by an Aspect. The Aspect takes help from a Pointcut to choose the Joinpoints where advice is to be woven.
- Aspect – Advice and Pointcut together define an Aspect. As we saw, Advice defines the job of an Aspect and when to perform it, while the Pointcut defines the locations where the Aspect weaves its advice. So the what, when, and where of a job define the Aspect.
- Target – The Target is the object being advised (the core concern). With the help of AOP this object is free to perform its primary task without worrying about cross cutting concerns.
- Proxy – When advice is applied to a target object, a Proxy object is created. The AOP container creates and manages the lifecycle of these proxy objects, so programmers need not worry about them.
- Weaving – Weaving is the process of applying Advice or Aspect to the target object to create the proxy object. Weaving can be done at compile time or classloading time or at runtime. Typically Spring AOP weaves aspect in the target object at runtime.
That’s a long list of terms to digest. Take your time in understanding them before moving on.
Types of Advice
One final piece before indulging in an example is to learn about the types of advice. Mainly there are four types of advice.
- Before Advice – Before advice is applied before the Joinpoint starts execution. BeforeAdvice is created by implementing org.springframework.aop.MethodBeforeAdvice interface. The method to be implemented is public void before(Method m, Object args[], Object target) throws Throwable
- After Returning Advice – After advice is applied after the Joinpoint completes executing. AfterReturningAdvice is created by implementing org.springframework.aop.AfterReturningAdvice interface. The method to be implemented is public void afterReturning(Method m, Object args[], Object target) throws Throwable
- Throws Advice – Throws advice is applied when Joinpoint throws an exception during execution.
- Around Advice – This advice surrounds the Joinpoint execution and is executed both before and after the Joinpoint executes. It can even be used to control the invocation of a Joinpoint.
Example
We will try to develop a simple cache with the help of Spring AOP. Caching has three main core concerns.
Core Concerns
- Save object in Cache.
- Return object from Cache.
- Delete object from Cache.
Now, apart from these core concerns, a caching framework has other passive tasks. These passive tasks form the cross cutting concerns.
Cross Cutting Concerns
- Re-sizing the cache when it reaches its size limit. (LRU) implementation.
- Locking an object to prevent from deletion when it is being read.
- Locking the cache to prevent and read/writes/deletes when it is getting re-sized.
Coding all these cross cutting concerns would be time-consuming and tedious, so let us simplify the example: we will just implement the re-size logic for when the cache is full. Once the example is done we will have a cache where we can put, get and delete objects. The max size of the cache has been set to 10 in the example. Once the cache stores 10 objects, any addition to the cache results in re-sizing of the cache by deletion of the first (oldest) object. The re-sizing operation is controlled by an Aspect created using Spring AOP. Here are the steps to be followed in the example.
Example Code Can be downloaded from SVN here:
- Dependencies – AOP is a core functionality of Spring, so to get Spring AOP going all we need are the core Spring jars. In your POM, add the following dependencies.
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-beans</artifactId>
    <version>${spring.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
</dependency>
- Core Caching object.
package com.aranin.spring.aop;

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

public class MyCache {
    private LinkedHashMap<String, Object> cacheMap = new LinkedHashMap<String, Object>();
    private LinkedHashMap<String, Date> timeStampMap = new LinkedHashMap<String, Date>();

    /** defines the max size of the cache */
    private long maxsize = 10; // should come from a properties file or some configuration

    /** how long an object should be stored before it is evicted from the cache */
    private long objectLifeTime = 10000;

    private boolean lock = false;

    public LinkedHashMap<String, Object> getCacheMap() { return cacheMap; }
    public void setCacheMap(LinkedHashMap<String, Object> cacheMap) { this.cacheMap = cacheMap; }
    public LinkedHashMap<String, Date> getTimeStampMap() { return timeStampMap; }
    public void setTimeStampMap(LinkedHashMap<String, Date> timeStampMap) { this.timeStampMap = timeStampMap; }
    public long getMaxsize() { return maxsize; }
    public void setMaxsize(long maxsize) { this.maxsize = maxsize; }
    public long getObjectLifeTime() { return objectLifeTime; }
    public void setObjectLifeTime(long objectLifeTime) { this.objectLifeTime = objectLifeTime; }
    public boolean isLock() { return lock; }
    public void setLock(boolean lock) { this.lock = lock; }

    /** retrieves an object from the cache */
    public Object get(String key) {
        return this.getCacheMap().get(key);
    }

    /** puts an object in the cache, recording the insertion time */
    public void put(String key, Object object) {
        // get the current date
        Date date = new Date(System.currentTimeMillis());
        // set object in cacheMap
        this.getCacheMap().put(key, object);
        // put timestamp in cache
        this.getTimeStampMap().put(key, date);
    }

    public void delete(String key) {
        this.getCacheMap().remove(key);
        this.getTimeStampMap().remove(key);
    }

    public void clearAll() {
        this.setCacheMap(new LinkedHashMap<String, Object>());
        this.setTimeStampMap(new LinkedHashMap<String, Date>());
    }

    /**
     * Removes the oldest entry when the cache is full.
     * Object lifetime is ignored -- this is just an example.
     */
    public void resize() {
        System.out.println("inside resize");
        long size = this.getCacheMap().size();
        System.out.println("size + " + size);
        if (size == this.getMaxsize()) {
            System.out.println("max size has reached");
            Map.Entry<String, Date> firstEntry = this.getTimeStampMap().entrySet().iterator().next();
            System.out.println("removing : " + firstEntry.getKey() + " value : " + firstEntry.getValue());
            this.timeStampMap.remove(firstEntry.getKey());
            Map.Entry<String, Object> firstCEntry = this.getCacheMap().entrySet().iterator().next();
            System.out.println("removing : " + firstCEntry.getKey() + " value : " + firstCEntry.getValue());
            this.cacheMap.remove(firstCEntry.getKey());
        }
        System.out.println("leaving resize with size : " + this.getCacheMap().size());
    }
}
There is not much to say about this class. There are two LinkedHashMaps: one stores the objects and the other stores the timestamp at which each object was pushed into the cache. The max size is set to 10, and it has get, put and delete methods. There is also a resize method which will be called by the Aspect, as we will see later.
- Resize Advice
package com.aranin.spring.aop;

import org.springframework.aop.MethodBeforeAdvice;
import java.lang.reflect.Method;

public class ResizeAdvice implements MethodBeforeAdvice {
    @Override
    public void before(Method method, Object[] args, Object target) throws Throwable {
        System.out.println("invoking " + method.getName() + " on " + target.getClass() + " Object");
        if (method.getName().equals("put")) {
            System.out.println("before invoking " + method.getName());
            ((MyCache) target).resize();
        }
    }
}
As you can see, this is a method-before advice. The class implements the MethodBeforeAdvice interface, which contains a single method, before(). If you examine the method, you will see that resize() is called whenever we call the put method.
- Spring context springaopdemo.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="resizeAdvice" class="com.aranin.spring.aop.ResizeAdvice" />
    <bean id="myCache" class="com.aranin.spring.aop.MyCache" />

    <bean id="myAOPCache" class="org.springframework.aop.framework.ProxyFactoryBean">
        <property name="target" ref="myCache" />
        <property name="interceptorNames">
            <list>
                <value>resizeAdvice</value>
            </list>
        </property>
    </bean>
</beans>
If you look at the above XML file, both MyCache and ResizeAdvice have been registered as Spring beans. The main bean in the file is myAOPCache. This is the proxy object that Spring AOP creates after applying the advice to the core class. The proxy object is created by the ProxyFactoryBean class. We pass a reference to the myCache object to the proxy and also register all the advice to be applied to the proxy.
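For interface-based targets, the proxy that ProxyFactoryBean produces is essentially a JDK dynamic proxy. A stripped-down sketch of the same idea, with no Spring dependency and entirely hypothetical names, shows how a before advice gets woven in at runtime:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Minimal, hand-rolled "before advice": every call to put() on the proxy
// first runs the advice, then is forwarded to the real target.
interface Cache {
    void put(String key, Object value);
}

class SimpleCache implements Cache {
    int size = 0;
    public void put(String key, Object value) { size++; }
}

public class BeforeAdviceDemo {
    static Cache advised(SimpleCache target, Runnable before) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().equals("put")) {
                before.run();                      // the "before advice"
            }
            return method.invoke(target, args);    // forward to the target
        };
        return (Cache) Proxy.newProxyInstance(
                Cache.class.getClassLoader(), new Class<?>[]{Cache.class}, handler);
    }

    public static void main(String[] args) {
        SimpleCache raw = new SimpleCache();
        Cache proxy = advised(raw, () -> System.out.println("advice before put"));
        proxy.put("1", "one");
        System.out.println("size is now " + raw.size);
    }
}
```

Callers hold a reference to the proxy, just as clients of myAOPCache do, and the advice runs without the target class knowing anything about it.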
- Finally, let us check the client which will help us run this demo.
package com.aranin.spring.aop;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.FileSystemXmlApplicationContext;

public class MyCacheClient {
    public static void main(String[] args) {
        ApplicationContext springcontext =
                new FileSystemXmlApplicationContext("D:/samayik/SpringDemos/src/main/resources/springaopdemo.xml");
        MyCache myCache = (MyCache) springcontext.getBean("myAOPCache");
        myCache.put("1", "1");
        myCache.put("2", "2");
        myCache.put("3", "3");
        myCache.put("4", "4");
        myCache.put("5", "5");
        myCache.put("6", "6");
        myCache.put("7", "7");
        myCache.put("8", "8");
        myCache.put("9", "9");
        myCache.put("10", "10");
        System.out.println((String) myCache.get("1"));
        System.out.println((String) myCache.get("2"));
        System.out.println((String) myCache.get("10"));
        myCache.put("11", "11");
        System.out.println((String) myCache.get("1"));
        System.out.println((String) myCache.get("2"));
        System.out.println((String) myCache.get("10"));
        System.out.println((String) myCache.get("11"));
    }
}
In this class we start the Spring container and load the beans defined in springaopdemo.xml. We push 10 objects into the cache, and when we try to push the 11th object, the first one is deleted and the 11th inserted. The output is big, so I am not posting it here. Run the class and check the output to your satisfaction.
Summary
In this post we learnt how to better deal with cross cutting concerns using Aspect Oriented Programming. AOP is a powerful concept that allows us to write cleaner and decoupled code. AOP does not provide anything new; all it does is segregate the business logic from the other mundane tasks the system has to perform. It enables reuse of the code implementing system-wide cross cutting concerns. We also learnt the various terminologies associated with AOP. Last but not least, we saw a simple example where we created a before-method advice using Spring AOP and applied it to manage our caching system.
Note
You can freely use and distribute the caching system developed in this code, though using it in a production system is not advisable.
As always, I intend this post as a launching platform for collective learning. Feel free to drop in a comment or two about what you feel about AOP and how you plan to use it in your code. Happy reading.
Excellent explanation of AOP concept and its terms. Great JOB !!!
Source: https://www.javacodegeeks.com/2013/10/aspect-oriented-programming-with-spring-2.html
3 Gems in Mono for .NET Programmers – The hidden potential of Mono.CSharp, Mono.Cecil And Mono.TextTemplating
Mono is getting more mature. I should say, Mono guys are even outshining their Microsoft counterparts in some areas. For example, Mono’s C# compiler already supports Compiler as a Service. In this post, we’ll have a quick look at some interesting Mono libraries that you can use in your .NET projects, to take advantage of some of their functionalities, that is not present in the .NET stack.
1 - Mono.CSharp – For Compiler as a Service and C# as a scripting language
Anders, in his C# Futures talk, mentioned C# Compiler as a Service and demonstrated a REPL (Read Evaluation Print Loop) implementation. And a few months back, I was pretty surprised when I found that Mono had announced Compiler as a Service support. This enables you to evaluate C# code on the fly and, more importantly, to use C# as a scripting language in your applications. Here is a step by step guide to this feature.
- Hosting Mono’s C# Compiler As a Service in .NET Apps
- Dynamic Filtering and Querying in .NET Applications
- Using C# as a scripting language
2 - Mono.TextTemplating – For T4 (Text Templating) Functionality in your .NET applications
In a lot of scenarios, a custom text template processor comes in handy. I remember putting together a simple, custom text template processor some time back for generating e-mails. Within Visual Studio, there is a text templating transformation engine built in, better known as T4. It is used in a lot of scenarios, mainly for generating code from models, and it is used in various VS packages for code generation purposes. You can also write your own template files (*.tt) within Visual Studio, and Clarius has a cool T4 editor available for Visual Studio.
All is good. But what if you want to implement some templating functionality in your own applications? We can't use the Microsoft.VisualStudio.TextTemplating libraries, because I don't believe that T4 can be legally redistributed without Visual Studio. But you can rely on Mono's equivalent T4 implementation, Mono.TextTemplating.
And, if you want to see an example, you should check out a Custom View Engine I developed for ASP.NET MVC using Mono.TextTemplating and the related example code. More than an ASP.NET MVC View Engine example, it is also an example of how to host the T4 Engine in your own applications.
3 - Mono.Cecil – For Modifying .NET Assemblies, And a better alternative to .NET Reflection
If you use .NET Reflection heavily, you should have a look at Mono.Cecil. You can also use Mono.Cecil to modify a compiled assembly, or to weave some code in to your assemblies (AOP) directly. To try out Cecil, get the latest version 0.9 from Github and compile Mono.Cecil library in VS2010.
Now, create a console application in Visual Studio, and try this example after adding a reference to Mono.Cecil.dll.
using System;
using Mono.Cecil;

namespace CecilAppTest
{
    class Program
    {
        static void Main(string[] args)
        {
            // Open a given assembly. Let us assume the first command line
            // param is the path to the assembly to load and inspect.
            AssemblyDefinition assemblyDefinition = AssemblyDefinition.ReadAssembly(args[0]);

            // All modules in this assembly
            foreach (var modDef in assemblyDefinition.Modules)
            {
                Console.WriteLine(" +--Module {0}", modDef.Name);

                // All types in this module
                foreach (var typeDef in modDef.Types)
                {
                    Console.WriteLine("   +--Type {0}", typeDef.Name);

                    // All methods in this type
                    foreach (var methodDef in typeDef.Methods)
                    {
                        Console.WriteLine("     --Method {0}", methodDef.Name);
                    }

                    // All properties in this type
                    foreach (var propDef in typeDef.Properties)
                    {
                        Console.WriteLine("     --Property {0}", propDef.Name);
                    }
                }
            }
        }
    }
}
Compile the above example and try it against a .NET DLL to see the output, providing the target assembly's path as the first argument.
The above example shows how to iterate through all modules, types, methods and properties in an assembly using Mono.Cecil. What is more interesting is that you can even modify this information and save it back. Paul has a good post on using Cecil to obfuscate your .NET assemblies. Also, have a look at his NCloak obfuscator; it is open source and built leveraging Cecil's power.
Also, check out my C# AccessPrivateWrapper where I explore how to use reflection to access private classes, variables etc.
Source: http://www.amazedsaint.com/2010/10/monocsharp-monocecil-and.html
This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: Django 1.0
Sending e-mail¶
Although Python makes sending e-mail relatively easy via the smtplib library, Django provides a couple of light wrappers over it, to make sending e-mail extra quick.
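For contrast, here is roughly the smtplib boilerplate that Django's wrappers save you from writing. The host, port, and addresses below are placeholder assumptions, not anything from the Django docs:

```python
import smtplib
from email.message import EmailMessage


def build_message(subject, body, from_addr, to_addrs):
    """Assemble a plain-text message (addresses are placeholders)."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = from_addr
    msg["To"] = ", ".join(to_addrs)
    msg.set_content(body)
    return msg


def send_plain_mail(subject, body, from_addr, to_addrs,
                    host="localhost", port=25):
    """The connect/send/quit dance that django.core.mail hides."""
    with smtplib.SMTP(host, port) as server:
        server.send_message(build_message(subject, body, from_addr, to_addrs))
```

Django's helpers below wrap this up in single function calls and pull the connection details from settings instead of arguments.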
The code lives in a single module: django.core.mail. Mail is sent using the SMTP host and port specified in the EMAIL_HOST and EMAIL_PORT settings, and the EMAIL_USE_TLS setting controls whether a secure connection is used.
Note
The character set of e-mail sent with django.core.mail will be set to the value of your DEFAULT_CHARSET setting.
send_mail()¶
The simplest way to send e-mail is using the function django.core.mail.send_mail(). Here's its definition: send_mail(subject, message, from_email, recipient_list, fail_silently=False, auth_user=None, auth_password=None)

send_mass_mail()¶
django.core.mail.send_mass_mail() is intended to handle mass e-mailing. Here's the definition: send_mass_mail(datatuple, fail_silently=False, auth_user=None, auth_password=None), where datatuple is a tuple in which each element is of the form (subject, message, from_email, recipient_list). Each separate element of datatuple results in a separate e-mail message, and the recipients in the same recipient_list will all see the other addresses in the "To:" field.

mail_admins() function¶
django.core.mail.mail_admins() is a shortcut for sending an e-mail to the site admins, as defined in the ADMINS setting. Here's the definition: mail_admins(subject, message, fail_silently=False)
mail_admins() prefixes the subject with the value of the EMAIL_SUBJECT_PREFIX setting, which is "[Django] " by default.
The "From:" header of the e-mail will be the value of the SERVER_EMAIL setting.
This method exists for convenience and readability.
mail_managers() function¶
django.core.mail.mail_managers() is just like mail_admins(), except it sends an e-mail to the site managers, as defined in the MANAGERS setting. Here's the definition:
Examples¶
Here's an example view that takes a subject, message and from_email from the request's POST data, sends that to admin@example.com and redirects to "/contact/thanks/" when it's done:
The view starts with: from django.core.mail import send_mail, BadHeaderError

The EmailMessage and SMTPConnection classes¶
Django's send_mail() and send_mass_mail() functions are actually thin wrappers around the EmailMessage and SMTPConnection classes in django.core.mail. If the keyword argument fail_silently is True, exceptions raised while sending the message will be quashed.
Testing e-mail sending¶
There are times when you do not want Django to send e-mails at all. For example, while developing a website, you probably don't want to send out thousands of e-mails -- but you may want to validate that e-mails will be sent to the right people under the right conditions, and that those e-mails will contain the correct content.
The easiest way to test your project's use of e-mail is to use a "dumb" e-mail server that receives the e-mails locally and displays them to the terminal, but doesn't actually send anything. Python has a built-in way to accomplish this with a single command: python -m smtpd -n -c DebuggingServer localhost:1025. This starts a simple SMTP server listening on port 1025 of localhost that prints all e-mail headers and bodies to standard output; you then only need to set your EMAIL_HOST and EMAIL_PORT accordingly. For more entailed testing and processing of e-mails locally, see the Python documentation on the SMTP server.
Source: http://docs.djangoproject.com/en/dev/topics/email/
printf() is one of the most frequently used functions in C for output. (We will discuss what a function is in a subsequent chapter.)
Try following program to understand printf() function.
#include <stdio.h>

int main()
{
    int dec = 5;
    char str[] = "abc";
    char ch = 's';
    float pi = 3.14;

    printf("%d %s %f %c\n", dec, str, pi, ch);

    return 0;
}
In an input example using scanf(), %d is used to read an integer value, and we pass &x to store the value read. Here & indicates the address of the variable x.
The program will prompt you to enter a value. Whatever value you enter at the command prompt will be output to the screen using the printf() function. If you enter a non-integer value, it will display an error message.
Enter an integer: 20
Read in 20
A complete set of input/output functions is given in C - Built-in Functions
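The input program those lines describe is not shown above; a reconstructed sketch (the names are guesses, as the original listing was lost) with the conversion pulled into a testable helper:

```c
#include <stdio.h>

/* Parses an integer out of a string the same way scanf("%d", &x) reads
   one from the keyboard; returns 1 on success, 0 on failure.  The '&'
   passes the ADDRESS of the variable that receives the value. */
int parse_integer(const char *input, int *out)
{
    return sscanf(input, "%d", out) == 1;
}

/* The interactive version described in the text would look like:

       int x;
       printf("Enter an integer: ");
       if (scanf("%d", &x) == 1)
           printf("Read in %d\n", x);
       else
           printf("Error: please enter an integer\n");
*/
```

Checking the return value of scanf()/sscanf() is what lets the program detect a non-integer entry and print the error message.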
Source: http://www.tutorialspoint.com/ansi_c/c_input_output.htm
I'm on CentOS 5.2 and I'm having a problem booting my two database servers. Our IT department performed a SAN upgrade over the weekend and now I can't boot - they say the upgrade went fine but obviously something has happened. The error I get a boot time is this;
fsck.ext3: No such file or directory while trying to open /dev/VolGroup01/db
I have an external consultant who is looking at it and saying it's a Superblock problem which can't be fixed, but I thought these were recoverable (according to this at least).
Anyone have any suggestions or pointers? Also, for future reference, what should I keep backups of beyond my data?
Utterly desperate and willing to pay for recovery at this point.
I am willing to bet that the SAN has shifted the beginning of the physical disk off by a few bytes. I've seen this before. It's a bitch to get your files off of a disk that has done this, but it is possible.
If you run 'fdisk -l' do you get messages about the starting cylinders on the device not marrying up? It's usually in brackets around each partition declaration.
Do you manage to find the LVM groups but not the disk itself? Is the LVM device made up of multiple SAN disks and just one is affected?
The following script is going to try to search for the correct offset on /dev/sdb where your lvm partition starts. No guarantees it will find anything. If it does, you might be in a good position to recover your data.
#!/usr/bin/python
import sys

def BoyerMooreHorspool(pattern, text):
    m = len(pattern)
    n = len(text)
    if m > n: return -1
    skip = []
    for k in range(256): skip.append(m)
    for k in range(m - 1): skip[ord(pattern[k])] = m - k - 1
    skip = tuple(skip)
    k = m - 1
    while k < n:
        j = m - 1; i = k
        while j >= 0 and text[i] == pattern[j]:
            j -= 1; i -= 1
        if j == -1: return i + 1
        k += skip[ord(text[k])]
    return -1

if __name__ == '__main__':
    giveup = 1024*1024*1024*2
    lba_offset = 0
    text = ""
    disk = open('/dev/sdb', 'r')
    while disk.tell() < giveup:
        #print "Checking: %f" % (lba_offset/(1024*1024))
        text = disk.read(1048576)
        s = BoyerMooreHorspool("\x00\x00\x00LVM2", text)
        if s > -1:
            print "Try offset: %d" % ((lba_offset+int(s))-533)
            sys.exit(0)
        else:
            lba_offset += 1048576
    print "Unable to find LVM position!"
Can you return what output you get?
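If it helps to sanity-check the search routine separately from the disk scan, here is a Python 3 port of the Boyer-Moore-Horspool function above (bmh_find is an invented name; the logic is the same, with the skip table as a dict so it works on plain strings):

```python
def bmh_find(pattern, text):
    # Boyer-Moore-Horspool substring search, as used in the recovery script
    m, n = len(pattern), len(text)
    if m > n:
        return -1
    # shift table: distance from each pattern char (except the last) to the end
    skip = {c: m - k - 1 for k, c in enumerate(pattern[:-1])}
    k = m - 1
    while k < n:
        j, i = m - 1, k
        # compare backwards from the current alignment point
        while j >= 0 and text[i] == pattern[j]:
            j -= 1
            i -= 1
        if j == -1:
            return i + 1  # full match: return its start index
        k += skip.get(text[k], m)
    return -1
```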
And its all back and fixed. Turns out someone had mounted the LUNs to a Windows machine in error, then removed and put them onto the CentOS VMs without thinking it would cause a problem. Each partition was labelled as a "MICROSOFT RESERVED PARTITION" - used the cached LVM setting to bring it back.
Boot into single-user mode and comment out the line mounting that filesystem in /etc/fstab. That should let you boot and take a look at why that filesystem won't mount.
|
http://serverfault.com/questions/352453/centos-superblock-corruption
|
CC-MAIN-2015-40
|
refinedweb
| 525
| 72.26
|
Using AsWing/as3 with Haxe
This simple tutorial explains how to use the powerful AsWing GUI framework within Haxe.
Prepare the Extern Class Tree
- Download the latest version of aswing/as3 here
- Extract it in a folder.
- Go to the AsWing\bin directory, and rename the file AsWing.swc to AsWing.zip
- Unzip this file with your favorite zip utility
Now you have two files: catalog.xml and library.swf. It is the library.swf that is important for us.
- Start up the command prompt, and go to the directory where library.swf resides.
- Execute:
haxe --gen-hx-classes library.swf
It will generate all the extern Haxe classes in a subdir “hxclasses” needed to use AsWing in Haxe.
Please note: some generated extern definitions need to be updated, typically where the equivalent AS3 method uses the '...' operator in the parameter list (addRow in org.aswing.ext.FormRow is an example). Whenever the compiler complains about one of these, substitute '...' with a number (however many you need) of optional parameters.
Here is an example:
//AS3:  addRow(...columns):FormRow{
//haXe: addRow(?p1 : Dynamic, ?p2 : Dynamic, ?p3 : Dynamic, ?p4 : Dynamic, ?p5 : Dynamic):FormRow{
Update May 24, 2009 : Using haXe 2.0.3 on Windows XP, no issues found concerning remaining '...' operators. YMMV
Setup a Test Project
Now let's start a new haxe project for testing.
- Create a new project directory, and copy the complete generated tree (“org” directory) in here.
- Copy the library.swf also in your project directory. This library will be compiled into your final movie.
Create a file Hello.hx:
import org.aswing.JButton;
import org.aswing.JFrame;
import org.aswing.geom.IntDimension;
import flash.Lib;
import flash.display.Sprite;
import flash.display.StageScaleMode;

class Hello extends Sprite {
    public function new() {
        super();
        //create a frame
        var frame : JFrame = new JFrame( this, "HelloApp" );
        frame.setSize( new IntDimension( 200, 120 ) );
        //create a button
        var button:JButton = new JButton("Hello from AsWing in Haxe!");
        //add the button to the frame
        frame.getContentPane().append(button);
        //add the frame to the stage
        Lib.current.stage.addChild(this);
        this.stage.scaleMode = StageScaleMode.NO_SCALE;
        //show it!
        frame.show();
    }

    public static function main() {
        new Hello();
    }
}
Create a file Hello.hxml:
-swf Hello.swf
-swf-header 600:400:21:ffffff
-main Hello
-swf-lib library.swf
-swf-version 9
Now let's compile it by executing:
>haxe Hello.hxml
When everything went fine, you should have a flash file Hello.swf displaying a frame with a button in it, saying Hello!
Notes
I had to use haxe version 1.19, the older one didn't work. There was an error generating the extern tree.
|
http://haxe.org/doc/flash/aswingas3
|
crawl-002
|
refinedweb
| 436
| 63.25
|
QDialog and Inheritance
I have a custom QDialog - for names' sake let's call this BaseDialog.
This BaseDialog is set out using a vertical layout and there are 3 widgets present: a top widget containing custom dialog title information, a bottom widget containing custom dialog buttons and a central widget which fills the remaining space on the form.
The central widget is where I need assistance.
What I want to achieve is this: I want to be able to inherit from this BaseDialog and be able to replace the contents of its central widget with the content of the derived dialog.
At the moment, if I create a new QDialog but derive it from my BaseDialog class, when I add things like QPushButton widgets to the form, at runtime if I'm not careful with placement they will overlap the widgets which are part of BaseDialog.
Is there a convenient way for me to achieve this?
UPDATE
What I've done so far is this:
Added a protected member to the BaseDialog class, exposing the central widget as follows:
BaseDialog *BaseDialog::centralWidget() { return (BaseDialog *)ui->centralWidget; }
The derived class, in its constructor, instead of calling ui->setupUi(this); calls ui->setupUi(this->centralWidget()); and so far, it seems fine. The only thing that doesn't seem to work is the resizing of the BaseDialog to accommodate the derived dialog's size.
This post is deleted!
@artwaw Thanks for your reply.
Please can you elaborate on how layouts will help me in my situation? I'm already using layouts in my dialog and widget designs but I don't see how I can use them to achieve what I want as per above.
If the central widget and layout for this dialog was accessible to derived classes this would work. You would need to make sure that all subclass's of your basedialog can access this widget and layout (central widget and layout declared as protected and not private).
Doing it this way might be a bad idea. It requires your subclass's to know something about the parent in order to work. It might be better to create a unique central widget instead and only use one basedialog class. Just a thought.
Assuming the layout is set properly and there is enough space any call to layouts setItem or similar should result in proper placement, spacing and margins being set.
@Rondog Thanks for your response. I've just updated my original post - is this along the lines of what you were meaning?
@artwaw Ah ok, I think I'm following.
So I think what you're saying is to create an instance of BaseDialog but then call setItem, passing in the instance of the content of the derived form to fill the layout?
@webzoid More or less, yes. If you know what size the dialog should have after the change you can always call resize() to set the expected size.
Going further I think that in theory you could compute expected size based on sizeHint() methods of the child items however I never did that personally.
|
https://forum.qt.io/topic/81682/qdialog-and-inheritance
|
CC-MAIN-2020-40
|
refinedweb
| 512
| 60.65
|
Agile development has created a culture of newly weds, programmers coupled in pairs oblivious to the fate that awaits them. As with all forms of coupling, the short-term benefits are outweighed by the long-term consequences. The optimism of a new relationship spelled out in code never lives up to the story, no matter how it is prioritised.
There has been much talk and many studies about how effective pair programming is, but clearly all those involved are looking for some kind of meaningful justification that makes sense of their predicament. Apparently pairing improves code quality and is enjoyable, but I doubt that: how can you really have fun and program well when you keep having to remove your headphones to listen to someone else questioning your mastery of code?
Good pairing is supposed to involve alternately navigating and driving. From what I can tell, this means navigating the quirks of another’s style and conventions while driving home your own beliefs about how to organise things properly. It is a contest in which there will be a winner and a loser. So much for team spirit!
I suspect that financial debt – which is like technical debt but with money – is a contributory factor. PCs, however, are not that expensive. Surely companies can spare enough money to supply each programmer with their own PC or, at the very least, a keyboard they can call their own? The point of a PC is that it’s personal – the clue is in the name. Sharing a computer is like sharing a toothbrush, only more salacious.
For example, the practice of promiscuous pairing is often promoted.
You swing from one partner to the next willingly, openly and frequently. Such loose coupling demonstrates a lack of commitment and sends out the wrong moral message. If you're going to have to pair, you should do it properly, all the way from 'I do' to 'Done'. It is likely that there will eventually be a separation of concerns, but that at least avoids the risk of communicating state-transition diagrams and infecting your C++ code with explicit use of the standard library namespace.
One thing that might be said in favour of pairing is picking up new skills. For example, I have learnt to use a Dvorak keyboard and a succession of editors with obscure key bindings and shortcuts. Being able to present new and existing partners with an unfamiliar and hostile environment puts them off their guard and sends out a clear signal about the roles in the relationship. I also find pairing can be effective with newbies. They can either sit and watch for a few hours or they can drive while you correct them from the back seat.
These benefits, however, are few and far between. The day-to-day reality is more cynical: the constant nagging, the compromises you make, the excuses you have to make up, the methods you use, the arguments, the rows, the columns... and you sometimes have to put up with your partner snoring after you’ve offered an extended and enlightening explanation of some minor coding nuance they seemed apparently unaware of!
So, don’t impair to code, decouple.
|
https://accu.org/index.php/journals/1983
|
CC-MAIN-2018-47
|
refinedweb
| 536
| 60.85
|
I am currently working on a program that generates 50 sets of six randomly generated lottery numbers, using a two-dimensional array. I have been able to generate and sort these numbers in ascending order. However, my method for testing each row of the array for unique numbers is not working 100%. I have included the code as follows from a much earlier assignment. If someone could give me a clue as to how to test just six randomly generated numbers for uniqueness, I'm sure I can take it from there. Even a simple algorithm might be all I need.
//T5Be01
//this program generates six random numbers
#include <iostream>
#include <cstdlib>
#include <ctime>

using namespace std;

int main()
{
    short num1 = 0;
    short num2 = 0;
    short num3 = 0;
    short num4 = 0;
    short num5 = 0;
    short num6 = 0;

    srand(time(NULL));

    num1 = 1 + rand() % (54 - 1 + 1);
    num2 = 1 + rand() % (54 - 1 + 1);
    num3 = 1 + rand() % (54 - 1 + 1);
    num4 = 1 + rand() % (54 - 1 + 1);
    num5 = 1 + rand() % (54 - 1 + 1);
    num6 = 1 + rand() % (54 - 1 + 1);

    cout << "Num 1 = " << num1 << endl;
    cout << "Num 2 = " << num2 << endl;
    cout << "Num 3 = " << num3 << endl;
    cout << "Num 4 = " << num4 << endl;
    cout << "Num 5 = " << num5 << endl;
    cout << "Num 6 = " << num6 << endl;

    return 0;
}
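Since the question asks only for a simple algorithm, here is a language-neutral sketch of the usual approach, written in Python (draw_unique is an invented name): keep drawing until a set holds six distinct values. The same loop translates directly to C++ with an array plus a membership check before each insert.

```python
import random

def draw_unique(count=6, low=1, high=54):
    # redraw on collision until we have `count` distinct numbers in range
    picked = set()
    while len(picked) < count:
        picked.add(random.randint(low, high))
    return sorted(picked)
```

An alternative that avoids redraws is a partial shuffle: fill an array with 1..54, shuffle it, and take the first six entries.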
|
http://cboard.cprogramming.com/cplusplus-programming/4805-unique-randomly-generated-numbers.html
|
CC-MAIN-2015-18
|
refinedweb
| 209
| 78.08
|
Sounds like an issue I was having: when my internet went down while I watched YouTube, the browser started freezing. When I changed chrome://flags/#offline-auto-reload to Disabled, the freezing didn't happen.
nanopi
@nanopi
Posts made by nanopi
- RE: STOP auto reloading of pages in Opera 46.0.2597.39?Opera for Windows
- RE: [Solved]URL for FULL opera windows 7 FOR off line installOpera for Windows
if you download the exe from FTP, check the digital signatures in file properties.
- RE: Where "Extended lazy session loading"???Opera for Windows
I used to unplug the ethernet when restarting Opera to get around this issue but got tired of that method quickly.
I started using a tab suspender to make up for the missing Extended lazy session loading.
I turned off automatic suspend so nothing changes unless I'm about to exit the browser.
it may be difficult to suspend tabs when restarting after a sudden crash and Opera's trying to load all the tabs but there is complete control when you're about to restart the browser.
I chose this extension because it preserves the icons while suspended.
- RE: Strange Problem - Delayed Response for BrowsingFuture releases
does this happen every time you restart the browser without rebooting?
- RE: Where's recently closed pages now?Future releases
can it be in both places?
many times I have closed 100 tabs at once and then a moment later wanted to bring back one that isn't in the 10 listed.
sometimes the tab list isn't wide enough to show me enough of a page title to tell it apart from others with similar page titles.
- RE: Twitch videos have "Flash out of date"Opera for Windows
- RE: google search result changed display.Opera for Windows
For anyone already using Tampermonkey, create a new script with the metadata below
copy the browser identification from Opera > About Opera
paste into metadata, remove OPR/ and everything after OPR/
save and test google
// ==UserScript==
// @name        Google Fix
// @namespace
// @version     0.1
// @description enter something useful
// @include     *.google.*
// @user-agent  paste browser identification here
// ==/UserScript==
|
https://forums.opera.com/user/nanopi
|
CC-MAIN-2019-47
|
refinedweb
| 355
| 63.8
|
The QPictureIO class contains parameters for loading and saving pictures. More...
#include <QPictureIO>
This class is obsolete. It is provided to keep old source code working. We strongly advise against using it in new code.
Constructs a QPictureIO object with all parameters set to zero.
Constructs a QPictureIO object with the I/O device ioDevice and a format tag.
Constructs a QPictureIO object with the file name fileName and a format tag.
Destroys the object and all related data.
Returns the picture description string.
See also setDescription().
Returns the file name currently set.
See also setFileName().
Returns the picture format string or 0 if no format has been explicitly set.
See also setFormat().
Returns the gamma value at which the picture will be viewed.
See also setGamma().
Returns a sorted list of picture formats that are supported for picture input.
Returns the IO device currently set.
See also setIODevice().
Returns a sorted list of picture formats that are supported for picture output.
Returns the picture's parameters string.
See also setParameters().
Returns the picture currently set.
See also setPicture().
Returns a string that specifies the picture format of the file fileName, or null if the file cannot be read or if the format is not recognized.
Returns the quality of the written picture, related to the compression ratio.
See also setQuality() and QPicture::save().
Sets the picture description string for picture handlers that support picture descriptions to description.
Currently, no picture format supported by Qt uses the description string.
See also description().
Sets the name of the file to read or write a picture from to fileName.
See also fileName() and setIODevice().
See also gamma().
Sets the picture's parameter string to parameters. This is for picture handlers that require special parameters.
Although the current picture formats supported by Qt ignore the parameters string, it may be used in future extensions or by contributions (for example, JPEG).
See also parameters().
Sets the picture to picture.
See also picture().
Sets the quality of the written picture to q, related to the compression ratio.
q must be in the range -1..100. Specify 0 to obtain small compressed files, 100 for large uncompressed files. (-1 signifies the default compression.)
See also quality() and QPicture::save().
Sets the picture IO status to status. A non-zero value indicates an error, whereas 0 means that the IO operation was successful.
See also status().
Returns the picture's IO status. A non-zero value indicates an error, whereas 0 means that the IO operation was successful.
See also setStatus().
|
http://doc.trolltech.com/4.5-snapshot/qpictureio.html
|
crawl-003
|
refinedweb
| 425
| 63.05
|
Hi all,
The following C# sample shows a dialog to view a certificate and its properties. This is the same dialog that appears when we double-click on the cert file in Explorer. I'll use CryptUIDlgViewCertificate API and its CRYPTUI_VIEWCERTIFICATE_STRUCT structure to achieve this:
...
using System.Security.Cryptography.X509Certificates;
using System.Runtime.InteropServices;
namespace MyNamespace
{
public partial class MyClass
{
...
public const int CRYPTUI_DISABLE_ADDTOSTORE = 0x00000010;
[DllImport("CryptUI.dll", CharSet = CharSet.Auto, SetLastError = true)]
public static extern Boolean CryptUIDlgViewCertificate(
ref CRYPTUI_VIEWCERTIFICATE_STRUCT pCertViewInfo,
ref bool pfPropertiesChanged
);
public struct CRYPTUI_VIEWCERTIFICATE_STRUCT
{
public int dwSize;
public IntPtr hwndParent;
public int dwFlags;
[MarshalAs(UnmanagedType.LPWStr)]
public String szTitle;
public IntPtr pCertContext;
public IntPtr rgszPurposes;
public int cPurposes;
public IntPtr pCryptProviderData; // or hWVTStateData
public Boolean fpCryptProviderDataTrustedUsage;
public int idxSigner;
public int idxCert;
public Boolean fCounterSigner;
public int idxCounterSigner;
public int cStores;
public IntPtr rghStores;
public int cPropSheetPages;
public IntPtr rgPropSheetPages;
public int nStartPage;
}
private void MySampleFunction()
{
// Get the cert
X509Certificate2 cert = new X509Certificate2(@"C:\temp\mycert.cer");
// Show the cert
CRYPTUI_VIEWCERTIFICATE_STRUCT certViewInfo = new CRYPTUI_VIEWCERTIFICATE_STRUCT();
certViewInfo.dwSize = Marshal.SizeOf(certViewInfo);
certViewInfo.pCertContext = cert.Handle;
certViewInfo.szTitle = "Certificate Info";
certViewInfo.dwFlags = CRYPTUI_DISABLE_ADDTOSTORE;
certViewInfo.nStartPage = 0;
bool fPropertiesChanged = false;
if (!CryptUIDlgViewCertificate(ref certViewInfo, ref fPropertiesChanged))
{
int error = Marshal.GetLastWin32Error();
MessageBox.Show(error.ToString());
}
}
}
}
I hope this helps.
Regards,
Alex (Alejandro Campos Magencio)
Hi,
Very interesting code you have. I am wanting to, one, detect the card; two, have a user enter a PIN; and lastly, read the card user information, which I will validate via other methods.
Will your sample, with some tweaking, allow me to do this?
Thanks in advance for your input!
Dave
I need to access a CTL on an IIS server using C#. I have been looking at your P/Invoke examples and I can probably do it using that. However, our customer has some concerns with using unmanaged code. I have been unable to get to the CTL using the X509 objects that I am aware of. Is there a way to access the CTL (assuming I know which store it is in and I know the CTL identifier from the SSLCtlIdentifier and SSLCtlStore values from AD/WMI). I just need to verify that the CTL contains certain roots and I can't seem to find a way to access it using C#.
The tool ssldiag.exe from MS shows the contents of the CTL, but I am not sure how it is implemented. I would like to do it in C# without having to use P/Invoke if at all possible.
Thanks in advance for any help you can provide.
I can't get this code working in VB.NET environment. I wrote such a code in VB.NET, but when I call CryptUIDlgViewCertificate() application throws an exception. I would be very thankful if you could try to do this in VB.NET and post here if it works. I could send you my code to review, but don't know your e-mail.
X509Certificate2 cert = new X509Certificate2(@"C:\temp\mycert.cer");
X509Certificate2UI.DisplayCertificate(cert);
Yes, thanks. This one works well. Just need to mention, that reference to "system.security.dll" should be added in VB.NET.
We are trying to automate importing of contacts to Outlook (both 2003 and 2007).
Basically we read the contact details from a DB or csv file and create the contact items using C# code.
In a folder we also have contact's certificate files that we would like to attach to the above created contacts in code.
We have a code for Outlook 2007 that uses propertyaccessor and which does the import with no reported errors. However when we open Outlook contacts manually, there are no certificates attached!
To troubleshoot, we compared, byte by byte, the certificate that we import via code and the same certificate that is manually attached to a contact.
They differ for few bytes both at the begining and at the end!
It seems that Outlook adds some header and footer bytes to the attached certificate when we import it manually.
If we artificially add those header and footer bytes to our certificate in code, it will be imported and shown later in Outlook just fine!
However these header/footer bytes seem to be certificate related and are not the same for every certificate. Just their total number is the same.
The question now is, how can we determine the process of adding these extra bytes before and after the certificate so it is always recognized by Outlook?
We searched on the internet a lot in regards to this problem but so far no solution was found.
I hope you can give us a clue on this?
Sorry, I don't know what Outlook does, as I don't know the product to that extent...
How do you display the dialog as a Modal Dialog as you can return to the main application and just keep clicking button to view certificates, not a good idea?
Hi Spitz,
Never tried it, but does it help if you pass a valid window handle in hwndParent param of the CRYPTUI_VIEWCERTIFICATE_STRUCT struct?
Yes, it does!
Though I worked this one out myself and only came here to update my findings lol
certViewInfo.hwndParent = this.Handle;
|
http://blogs.msdn.com/b/alejacma/archive/2009/02/13/how-to-view-a-certificate-programatically-c.aspx
|
CC-MAIN-2014-42
|
refinedweb
| 870
| 57.47
|
A utility package for managing an American football game, including scoring, down, distance and so on, a Game Clock, and game setup with team names and logos. Can be used for further implementation like live tickers or displaying scoreboards.
Project description
Pyfootballscoring
A utility package for managing an American football game, including scoring, down, distance and so on, a Game Clock, and game setup with team names and logos. Can be used for further implementation like live tickers or displaying scoreboards.
Getting Started
This package is developed with Python3 and was not tested with Python2, so there's no guarantee it will work under Python2.
Installation
The package can be installed by pip:
pip install footballscoring
This installs the only requirement, Apscheduler, with it.
Testing
Run unittests simply like this while in the main directory:
python -m unittest discover
Usage
For keeping track of the Score, Down etc. or the Game Clock, simply import the according class and instantiate it.
Game Clock Example
If you want to use the Game Clock, simply instantiate the GameClock object with the required quarter length in minutes. If you want to control how often the clock is updated, specify interval_ms in milliseconds.
from footballscoring.gameclock import GameClock game_clock = GameClock(quarter_length=12, interval_ms=10)
Now you can simply start, stop, set or reset the clock by calling the according method.
game_clock.start() game_clock.stop() game_clock.reset_clock() game_clock.set_clock(minutes=2, seconds=3)
While creating and running the game, the package will keep track of the current game status and its validity regarding range of the values.
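To make the clock's behaviour concrete, here is a minimal, hypothetical sketch of a countdown game clock along the lines the package describes (SimpleGameClock is an invented name, not the package's API): a scheduler such as APScheduler would call tick() every interval_ms.

```python
class SimpleGameClock:
    """Toy countdown clock: quarter length in minutes, tick interval in ms."""

    def __init__(self, quarter_length, interval_ms=1000):
        self.interval_ms = interval_ms
        self.remaining_ms = quarter_length * 60 * 1000
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def tick(self):
        # called by a scheduler every interval_ms; counts down while running
        if self.running and self.remaining_ms > 0:
            self.remaining_ms = max(0, self.remaining_ms - self.interval_ms)

    def display(self):
        # MM:SS scoreboard string
        total_s = self.remaining_ms // 1000
        return "%02d:%02d" % (total_s // 60, total_s % 60)
```

The real package adds validity checks (value ranges) and wires the tick into APScheduler; this sketch only shows the counting logic.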
Contributing
Feel free to suggest more tests or features in the Issue Section or put it as a pull request.
|
https://pypi.org/project/footballscoring/
|
CC-MAIN-2021-10
|
refinedweb
| 314
| 51.07
|
SYNOPSIS
#include <opendbx/api.h>
int odbx_escape (odbx_t* handle, const char* from, unsigned long fromlen, char* to, unsigned long* tolen);
DESCRIPTION
odbx_escape() neutralizes potentially dangerous characters of the string so it can be used as part of a statement. For security reasons every user input has to be passed to odbx_escape() to avoid SQL injection attacks, which can have fatal consequences! It's also a good idea to escape strings returned from database fields again if you want to use them in a query, because they don't stay escaped once they are returned as part of a record.
Most backends require the buffer to be more than twice as long as the input string. To be precise, the output buffer must be 2 * size of input + 1 bytes long. After successfully escaping the characters in from, they are written into the memory provided via to and the value/result parameter tolen is updated to the new length of to in the end.
The first parameter handle is the connection object created and returned by odbx_init() which becomes invalid as soon as it was supplied to odbx_finish().
from has to point to a character string containing the string which should be used as part of a statement. It doesn't have to be zero-terminated because the length of it is also given via fromlen. The backends may support variable width character sets like UTF-8 but this function doesn't support the wide char type (wchar_t) where each character has a fixed size of two or four bytes.
The value of the parameter fromlen must be the length in bytes of the string which from is pointing to. This is also true for variable width character sets like UTF-8 but the wide char type (wchar_t) is not supported. The terminating \0 character shouldn't be part of fromlen.
The calling function provides a buffer for storing the escaped string via to. In general, the length of the buffer should be more than twice as long as the string passed via from to be able to store the escaped string even if every character has to be escaped.
tolen is a value-result parameter which points to an integer variable in the calling function. It must contain the original length of the buffer given via to and if escaping the string in from suceeded, odbx_escape() will store the new length of the escaped string in this variable.
RETURN VALUE
odbx_escape() returns ODBX_ERR_SUCCESS, or one of the following error codes if it wasn't able to escape the given string to be suitable for a statement:
- -ODBX_ERR_PARAM
- One of the supplied parameters is invalid or is NULL and this isn't allowed in the used backend module or in the native database client library
- -ODBX_ERR_SIZE
- The length of the escaped string exceeds or is likely to exceed the available buffer (before 1.1.4 the name of the label was ODBX_ERR_TOOLONG but the value is still the same)
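The buffer-sizing rule above is easy to demonstrate. Below is a hypothetical Python rendering of the contract (escape_sketch is an invented name, not the OpenDBX implementation): each dangerous character can at most double in size, so an output buffer of 2 * input length + 1 bytes always suffices, the +1 covering the terminating NUL.

```python
def escape_sketch(s):
    # double backslashes and single quotes, as many SQL backends require
    out = s.replace("\\", "\\\\").replace("'", "''")
    # worst case: every input character doubled, plus one byte for the NUL
    assert len(out) + 1 <= 2 * len(s) + 1
    return out
```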
|
https://manpages.org/odbx_escape/3
|
CC-MAIN-2022-40
|
refinedweb
| 487
| 64.44
|
Recently emerge started to fail here with the following error:
--------
<root@mordorpc portage> emerge -pv pygtk
These are the packages that would be merged, in order:
Calculating dependencies \Traceback (most recent call last):
File "/usr/bin/emerge", line 7928, in <module>
retval = emerge_main()
File "/usr/bin/emerge", line 7922, in emerge_main
myopts, myaction, myfiles, spinner)
File "/usr/bin/emerge", line 7164, in action_build
retval, favorites = mydepgraph.select_files(myfiles)
File "/usr/bin/emerge", line 2476, in select_files
expanded_atoms = self._dep_expand(root_config, x)
File "/usr/bin/emerge", line 2280, in _dep_expand
cp_set.update(db.cp_all())
File "/usr/lib64/portage/pym/portage.py", line 7561, in cp_all
for y in listdir(oroot+"/"+x, EmptyOnError=1, ignorecvs=1, dirsonly=1):
File "/usr/lib64/portage/pym/portage.py", line 290, in listdir
list, ftype = cacheddir(mypath, ignorecvs, ignorelist, EmptyOnError,
followSymlinks)
File "/usr/lib64/portage/pym/portage.py", line 226, in cacheddir
list = os.listdir(mypath)
OSError: [Errno 12] Cannot allocate memory: '/usr/portage/net-mail'
--------
This seems to be quite random, as I was able to run this command after retrying it 4 or 5 times. It seems to happen for random packages and always works fine eventually after some emerge reruns.
Anyway, this machine has 2GB of RAM, so I guess an out-of-memory situation is quite unlikely; I saw this message even when booted in text mode without X.
Reproducible: Always
Steps to Reproduce:
Created an attachment (id=150265) [edit]
emerge --info
Created an attachment (id=150266) [edit]
Console output from example fail
As you can see first three commands fails, but next one works fine and another
fails too, etc.
It seems to be kernel related as I can observe this problem only on 2.6.25
kernel and not on 2.6.24.
Created an attachment (id=150362) [edit]
.25 config file
Hmmm, ping? This is really annoying when updating the system. Any ideas what this can be or how to debug it?
Unless you show that the problem does not occur with a vanilla kernel (I'm not
sure exactly which kernel sources you are using), it's probably safe to assume
that it's an upstream kernel bug therefore you should be looking to kernel.org
for answers.
If this is a kernel bug it should be assigned to the kernel team. We don't usually mark kernel issues resolved upstream until they've been reported on the kernel.org bugzilla.
Could you please provide your dmesg and "cat /proc/meminfo" from the system
after it starts showing these symptoms, thanks.
I guess this could be a portage bug, as this kind of problem should show up for other applications too? The system has worked perfectly stably for two days now with heavy usage of deluge, firefox and gcc, with no problems.
I can not see anything suspicious in dmesg, but I will attach both files as
suggested.
Created an attachment (id=153911) [edit]
cat /proc/meminfo
Created an attachment (id=153913) [edit]
dmesg
I think there is nothing unusual in it.
These are from immediately after you saw an OOM? Nothing unusual, as you say.
It could be a bug in portage, but it is strange that it only manifests under
2.6.25. Just to confirm, booting back into 2.6.24 (without changing anything
else) makes the problem go away, right? Could you upload your 2.6.24 config,
please, let's check there weren't any significant changes.
I see you are running with an unstable version of portage, does the problem
still occur if you switch to using 2.1.4.4?
I tried to look at the strace output from the faulty call, but I can not see anything
unusual.
Created an attachment (id=153949) [edit]
strace -f -o /tmp/st/portage-mem.log -- emerge gtk-engines-qt
I looked at the Python source, I think the error is coming from
Modules/posixmodule.c posix_listdir()
if ((dirp = opendir(name)) == NULL) {
return posix_error_with_allocated_filename(name);
}
i.e. opendir() on something is returning NULL, probably /usr/portage/net-mail
opendir() is implemented in libc as a wrapper around open() or something like
that
Then I looked at the strace logs, but it shows that opening
/usr/portage/net-mail quite early on was successful.
It gets to this part:
11723 open("/usr/portage/sec-policy", O_RDONLY|O_NONBLOCK|O_DIRECTORY|0x80000)
= 3
11723 fstat(3, {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
11723 getdents(3, /* 68 entries */, 4096) = 2568
11723 brk(0x2deb000) = 0x2d8a000
11723 mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
0) = 0x7f8708afb000
11723 getdents(3, /* 0 entries */, 4096) = 0
11723 close(3) = 0
11723 write(2, "Traceback (most recent call last"..., 35) = 35
There are no errors here, and nothing to do with net-mail
The libc readdir loop always does getdents() twice or more in order to decide that it
has finished reading the directory (when it gets a return code of 0). The only
slightly unusual thing is the brk and mmap in the middle, but this seems to be
just python growing its data segment and allocating some anonymously mapped
memory. I don't see why it would have any effect on anything here.
strange...
Very strange. The memory allocation is very suspicious, given the error
reported by python, even though it succeeds(!)
I think the error is coming from just after the opendir, in the for loop
immediately below. The strace logs show getdents are happening, so it must be
inside the loop, doing readdir calls:
for (;;) {
Py_BEGIN_ALLOW_THREADS
ep = readdir(dirp);
Py_END_ALLOW_THREADS
if (ep == NULL)
break;
Then outside the loop:
if (errno != 0 && d != NULL) {
/* readdir() returned NULL and set errno */
closedir(dirp);
Py_DECREF(d);
return posix_error_with_allocated_filename(name);
}
Looking at the code one thing that jumps out is the use of errno directly to
check whether the loop was terminated successfully or on error. It looks like
PyEval_RestoreThread takes care not to modify errno, so that seems safe.
However a quick look on the python issue tracker gives:
This could explain the problem, but only if the path given was unicode. Marcin,
would you be able to recompile python with the patch given in that ticket? If
you need assistance in doing so then I'd be happy to help. It would be very
interesting to see if the problem goes away with it applied.
BTW, regarding net-mail, note that the reporter says it fails on random
directories. In this case it seems to have failed reading
"/usr/portage/sec-policy", but I doubt the particular directory matters much.
OK, tried this patch and it seems it's a bit broken, as it makes python
completely unusable. 'emerge --help' throws:
Traceback (most recent call last):
File "/usr/bin/emerge", line 31, in <module>
import portage
File "/usr/lib64/portage/pym/portage.py", line 20, in <module>
import copy, errno, os, re, shutil, time, types
ImportError: No module named time
Looking to /usr/lib64/python2.5 directory shows me that lib-dynload directory
was empty (But it was not empty on portage workdir image)
About the net-mail directory: it indeed fails on a random directory in portage, only
the stacktrace is similar (the same functions show up and it always ends with File
"/usr/lib64/portage/pym/portage.py", line 226, in cacheddir
list = os.listdir(mypath))
About unicode my system uses unicode by default.
Created an attachment (id=154619) [edit]
python -v /usr/bin/emerge
Verbose python output after patch
(In reply to comment #16)
> OK, tired this patch and it seems it's a bit broken as it makes python
> completly unusable. 'emerge --help' throws:
Hmm, looks like something went wrong somewhere. The patch is really quite
simple and limited in scope; it certainly shouldn't be causing that sort of
trouble. I've just applied it here without any problem, this is what I did:
ebuild /usr/portage/dev-lang/python/python-2.5.2-r4.ebuild unpack
patch -d /var/tmp/portage/dev-lang/python-2.5.2-r4/work/Python-2.5.2 -p1 <
proposed-patch.txt
ebuild /usr/portage/dev-lang/python/python-2.5.2-r4.ebuild compile install
sudo ebuild /usr/portage/dev-lang/python/python-2.5.2-r4.ebuild qmerge
> About unicode my system uses unicode by default.
Ah, very interesting...
(In reply to comment #18)
> ...I've just applied it here without any problem...
Whoa -- spoke too soon! Sorry about that, I didn't test correctly before. I get
the same error you did. For anyone else playing along at home -- don't follow
those previous instructions..
This problem was highly random, so I can not be 100% sure it's gone, but I have used
portage for a few days now with this patch and there were no OOM messages.
I think that was it. Thanks for not closing this bug and helping me out with this,
as I probably would never have found issue1608818 on the Python bugzilla, as it's quite
ancient now.
Excellent, glad to be of service.
Since this is seems like a fairly critical python bug (for you and anyone else
using unicode, anyway) I'll send it over to the Python team. I'm unfamiliar
with their procedures, but they may want to add the patch to our patch set
and/or try to push it upstream. I've also updated the ticket on the Python bug
tracker and uploaded the working version of the patch.
I want to confirm that this fixed this portage problem for good :) Also, the
same problem appears on my gf's machine, and I wonder, can it be pulled into the python
ebuild as soon as possible?
I see this patch was not included in the new python 2.5.2-r5, as I started to observe the
same problem here as soon as I updated python.
(In reply to comment #20)
>.
>
Hello
I have this problem too, but I'm green in Gentoo, and I don't know what I have
to do with this: python-2.5.2-unicode-listdir.patch. Can somebody explain to me
in easy steps what I have to do?
I downloaded Python from python.org, changed everything that is written on the page:
Next (./configure, make, make install) and no effect. I always get this same
report when I run emerge (something):
!!! Failed to complete portage imports. There are internal modules for
!!! portage and failure here indicates that you have a problem with your
!!! installation of portage. Please try a rescue portage located in the
!!! portage tree under '/usr/portage/sys-apps/portage/files/' (default).
!!! There is a README.RESCUE file that details the steps required to perform
!!! a recovery of portage.
No module named _socket
Traceback (most recent call last):
File "/usr/bin/emerge", line 28, in <module>
import portage
File "/usr/lib/portage/pym/portage.py", line 55, in <module>
import getbinpkg
File "/usr/lib/portage/pym/getbinpkg.py", line 10, in <module>
import
htmllib,HTMLParser,string,formatter,sys,os,xpak,time,tempfile,base64,urllib2
File "/usr/lib/python2.5/urllib2.py", line 92, in <module>
import httplib
File "/usr/lib/python2.5/httplib.py", line 71, in <module>
import socket
File "/usr/lib/python2.5/socket.py", line 45, in <module>
import _socket
ImportError: No module named _socket
Fixed in python-2.5.2-r7, sorry for the delay.
*** Bug 238174 has been marked as a duplicate of this bug. ***
dextool 2.0.0-rc.1
C/C++ tooling for mocking, mutation testing and visualisation
To use this package, run the following command in your project's root directory:
dextool
Dextool is a framework for writing plugins using libclang. The main focus is tools for testing and static analysis.
The plugins in a standard installation of Dextool are:
- Analyze. Analyze C/C++ code to generate complexity numbers such as McCabe.
- C TestDouble. Analyze C code to generate a test double implementation.
- C++ TestDouble. Analyze C++ code to generate a test double implementation.
- Mutate. Mutation testing tool for C/C++.
- GraphML. Analyze C/C++ code to generate a GraphML representation. Call chains, type usage, classes as groups of methods and members.
- UML. Analyze C/C++ code to generate PlantUML diagrams.
Plugin Status
- Analyze: production ready.
- C TestDouble: production ready. The API of the generated code and how it behaves is stable.
- C++ TestDouble is production ready. The API of the generated code and how it behaves is stable.
- Fuzzer: alpha.
- GraphML: beta.
- UML: beta.
- Mutate: production ready.
Getting Started
Dextool depends on the following software packages:
- llvm (4.0+, both libclang and LLVM, see below)
- llvm-xyz-dev (4.0+)
- libclang-xyz-dev (4.0+)
- cmake (3.5+)
- D compiler (dmd 2.088.1+, ldc 1.16.0+)
- sqlite3 (3.19.3-3+)
Dextool has been tested with libclang [4.0, 5.0, 6.0, 7.0, 8.0].
For people running Ubuntu, two of the dependencies can be installed via apt-get. The version of clang and llvm depends on your Ubuntu version.
sudo apt install build-essential cmake llvm-4.0 llvm-4.0-dev clang-4.0 libclang-4.0-dev libsqlite3-dev
It is recommended to install the D compiler by downloading it from the official distribution page.
# link curl -fsS | bash -s dmd
Once you have a D compiler, you also have access to the D package manager
dub. The easiest way to run dextool is to do it via
dub.
dub run dextool -- -h
But if you want to, you can always download the source code and build it yourself:
git clone
cd dextool
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/path/to/where/to/install/dextool/binaries ..
make install -j2
Done! Have fun. Don't be shy to report any issue that you find.
Common Build Errors
component_tests Fail
The most common reason why
component_tests fail is that clang++ tries to use the latest GCC that is installed, but the C++ standard library is not installed for that compiler.
Try to compile the following code with clang++:
#include <string>
int main(int argc, char **argv) { return 0; }
clang++ -v test.cpp
If it fails with something like this:
test.cpp:1:10: fatal error: 'string' file not found
it means that you need to install the c++ standard library for your compiler.
In the output look for this line:
/usr/bin/../lib/gcc/x86_64-linux-gnu/XYZ/../../../../include/c++
From that line we can deduce that the package to install in Ubuntu is:
sudo apt install libstdc++-XYZ-dev
Mismatch Clang and LLVM
To build dextool, the dev packages are required. Dextool is optimistic and assumes that the latest and greatest version of llvm+libclang should be used. But this also requires that the dev packages are installed.
If you get this error:
libclang_interop.hpp:13:10: fatal error: clang/Analysis/CFG.h: No such file or directory #include <clang/Analysis/CFG.h>
It means that you need to install
llvm-x.y-dev and
libclang-x.y-dev for the version that Dextool detected.
SQLite link or missing
The sqlite3 library source code ships with a CMake build file in the vendor directory. It is intended for old OSs that have too-old versions of SQLite.
To use it, do something like this:
mkdir sqlite3
cd sqlite3 && cmake ../vendor/sqlite && make && cd ..
# setup dextool build to use it
mkdir build
cd build && cmake .. -DSQLITE3_LIB="-L/opt/sqlite -lsqlite3"
Cmake is unable to find the D compiler
If you have a D compiler installed in such a way that it isn't available in
your $PATH, you can specify it manually.
cmake .. -DD_COMPILER=/foo/bar/dmd/2.088/linux64/bin/dmd
Usage
See the usage examples in respective plugin directory:
Credit
Jacob Carlborg for his excellent DStep. It was used as a huge inspiration for this code base. Without DStep, Dextool wouldn't exist.
- Registered by Joakim Brännström
- 2.0.0-rc.1 released 14 days ago
- joakim-brannstrom/dextool
- MPL-2
- Authors:
-
- Dependencies:
- none
- System dependencies:
- for ubuntu: sudo apt install build-essential cmake llvm-4.0 llvm-4.0-dev clang-4.0 libclang-4.0-dev libsqlite3-dev
- Versions:
- Show all 32 versions
- Download Stats:
0 downloads today
2 downloads this week
2 downloads this month
67 downloads total
- Score:
- 1.3
- Short URL:
- dextool.dub.pm
NoOneBAN USER
Look at that link there? That is the language we have created.
// ZoomBA
def is_isomorphic( s1, s2 ){
if ( (l = size(s1))!= size(s2) ) return false
m = dict()
for ( i : [0:l] ){
if ( s1[i] @ m ){
y = m[ s1[i] ]
} else {
y = m[ s1[i] ] = s2[i]
}
if ( y != s2[i] ) return false
}
true
}
println( is_isomorphic('','' )) // true
println( is_isomorphic('aabaac','xxtxxw' )) // true
println( is_isomorphic('ac','xxt' )) // false
println( is_isomorphic('acaa','xxtw' )) // false
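For readers who don't speak ZoomBA, the same one-directional mapping check can be sketched in Python (my translation, not part of the original answer):

```python
def is_isomorphic(s1, s2):
    # strings of different length can never be isomorphic
    if len(s1) != len(s2):
        return False
    mapping = {}
    for a, b in zip(s1, s2):
        # fix the image of `a` on first sight; it must never change afterwards
        if mapping.setdefault(a, b) != b:
            return False
    return True
```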
Forced to give a downvote to all. Think about it. What are you going to do when we have 0's, let's say? One zero?
input: [1 2 0 5 6]
output: [ 0 0 60 0 0 ]
Hmm.
That was the trick all along.
I am hungry, and SRH lost so I am frustrated, but this should do the trick:
/*
The find next lower tidy number is
not a classically trivial one.
Obvious trivial solution is :
def find_unoptimal( n ){ while( !is_tidy(n) ){ n -= 1 } }
But that is a mess.
A better solution will be:
1. Search for inversion of order ( d[i] > d[i+1] ) from left.
2. If on the inversion, we can down the digit:
Down(x) = x - 1
and d[i-1] < d[i] - 1 then we can down the digit, replace rest by 9.
3. If we can not do that ( 12222334445555123 ),
we search for a digit change from right: ( d[i]<d[i+1] )
where we can change d[i+1] to d[i] and replace all right digits by 9
*/
def find_last_tidy( n ){
sn = str(n)
l = size(sn)
i = index( [0:l-1] ) :: { sn[$.o] > sn[$.o+1] }
if ( i < 0 ) return n // this is first step
// is there a left digit? if not...
if ( i == 0 ) return int( '' + ( int(sn[0]) - 1 ) + ( '9' ** (l - 1) ) )
// there is
// if it is like 12222334445555|123 ?
// then we need to check j is the repeat size
j = index( [i:-1] ) :: { sn[$.o] != sn[i] }
ns = sn[0:i-j] + ( int(sn[i]) - 1 ) + ( '9' ** (l - i + j - 2 ) )
int ( ns )
}
println( find_last_tidy(@ARGS[0]) )
@rajendra : I actually gave you an up vote. Later, I found, almost there but not there.
Observe this :
Input : 332 Expected : 299
Actual : 229
I was looking for branch conditions. Gotcha.
I guess the dumb approach wins here :-)
def find_unoptimal( n ){
while( !is_tidy(n) ){ n -= 1 }
n // return
}
Obviously I will try to improve, but there are some very interesting edge scenarios.
// ZoomBA
/* tidy no. */
def is_tidy( n ){
x = n % 10
y = n / 10
while ( y != 0 ){
break ( y % 10 > x )
x = y % 10
y = y / 10
}
return ( y == 0 )
}
println( is_tidy( @ARGS[0] ) )
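A Python rendering of the same digit check, plus the "dumb approach" fallback mentioned above (my sketch, assuming non-negative integers):

```python
def is_tidy(n):
    # a number is tidy when its digits are non-decreasing left to right
    s = str(n)
    return all(s[i] <= s[i + 1] for i in range(len(s) - 1))

def find_last_tidy(n):
    # brute force: walk downwards until a tidy number shows up
    while not is_tidy(n):
        n -= 1
    return n
```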
/*
This is the one which should do it
*/
def has_3_consecutive( dates ){
if ( size(dates) < 3 ) return false
ld = list(dates)
// in long form 3 consecutive date are (D - d), D, (D + d)
exists([2: size(dates)]) where { 2 * ld[$.o -1 ] == ( ld[$.o-2] + ld[$.o] ) }
}
ms = mset( file( 'logfile.txt' ) ) as { #(date,key) = $.o.split('\t') ; key }
users_cons = select( ms ) where {
dates = sset ( $.o.value ) as { #(date,key) = $.o.split('\t') ; int( time(date,'MM/dd/yyyy') ) }
has_3_consecutive( dates )
} as { $.o.key }
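The arithmetic-progression trick above (three dates D-d, D, D+d satisfy 2*D == (D-d) + (D+d)) can be sketched in Python, assuming the dates have already been converted to integers (a hypothetical helper, not the original code):

```python
def has_3_consecutive(dates):
    # sort, then look for any equally spaced triple: 2*mid == left + right
    ds = sorted(dates)
    return any(2 * ds[i - 1] == ds[i - 2] + ds[i] for i in range(2, len(ds)))
```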
(zoomba)input=[2,3,5,3,7,9,5,3,7]
@[ 2,3,5,3,7,9,5,3,7 ] // ZArray
(zoomba)ms = mset(input)
{2=1, 3=3, 5=2, 7=2, 9=1} // HashMap
(zoomba)l = list(ms.entries)
[ 2=1,3=3,5=2,7=2,9=1 ] // ZList
(zoomba)sortd(l) :: { $.l.value < $.r.value }
true // Boolean
(zoomba)l
[ 3=3,5=2,7=2,2=1,9=1 ] // ZList
(zoomba)x = fold(l,list()) -> { for(i:[0:$.o.value] ){ $.p += $.o.key } }
[ 3,3,3,5,5,7,7,2,9 ] // ZList
(zoomba)x = str(20,2)
10100 // String
(zoomba)x[-4]
0 // Character
(zoomba)
That is one way. Another is trivial using bit manipulation.
// ZoomBA
def get_all_unique( files ){
from( files, set() ) -> {
read($.o,true)
}
}
// cool?
x = get_all_unique( [ 'samples/scratch.zm' , '/Codes/zoomba/wiki/tmp.zm' ] )
println(x)
// ZoomBA
def get_pos( string ){
pos = { 'x' : 0 , 'y' : 0 }
tokens( string , '([NEWS](\\d*))' ) -> {
directive = $.o // the directive
direction = directive[0]
amount = int( directive[1:-1] , 1 )
printf( '%s %s\n', direction, amount )
switch( direction ){
case _'N' : pos.y += amount
case _'S' : pos.y -= amount
case _'E' : pos.x += amount
case _'W' : pos.x -= amount
}
}
pos // return it
}
// use it
s = 'E4N4S4W'
println ( get_pos(s) )
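The same tokenisation works in Python with re.findall; the regex and the default step of 1 are taken from the snippet above (my translation):

```python
import re

def get_pos(s):
    # each token is a direction letter plus an optional amount (default 1)
    pos = {'x': 0, 'y': 0}
    step = {'N': ('y', 1), 'S': ('y', -1), 'E': ('x', 1), 'W': ('x', -1)}
    for direction, amount in re.findall(r'([NEWS])(\d*)', s):
        axis, sign = step[direction]
        pos[axis] += sign * (int(amount) if amount else 1)
    return pos
```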
a = [1, 2, 1, 0, 5, 2, 4, 2, 3, 0, 1, 3, 2, 4]
ms = mset(a)
l = list( ms.entries )
sortd( l ) :: { $.left.value < $.right.value }
println( str( l , ',') -> { $.key } )
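In Python, collections.Counter gives the same descending-frequency expansion directly (my equivalent, not the poster's):

```python
from collections import Counter

def by_frequency(a):
    # most_common() sorts keys by descending count; re-expand each key
    return [k for k, c in Counter(a).most_common() for _ in range(c)]
```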
// ZoomBA.
// Basic trick, iterate over permutations, and collect elements
s = 'abcdc'
len = size(s)
sa = s.toCharArray
sorta(sa) // the base template
// iterate and select over all permutation indices
// specify the collector - classic example of from syntax
deranged = from ( perm( [0:len].asList ) , set() ) :: {
// when no element exists such that there is same char in place
!exists( $.o ) :: { sa[ $.o ] == s[ $.i ] }
// finally map permutation index back to the string
} -> { str( $.o ,'' ) -> { sa[ $.o ] } }
println( deranged )
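The same filter over permutations, in Python (my sketch of the idea, not the original code):

```python
from itertools import permutations

def deranged(s):
    # keep permutations of s's letters where no position keeps its original letter
    return {''.join(p) for p in permutations(sorted(s))
            if all(a != b for a, b in zip(p, s))}
```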
Interesting problem. There can be at least 2 ways to get it done.
The trivial one:
def getRandomOdd( min, max){
r = random()
while ( (n = r.num(min,max)) % 2 == 0 );
n // return n
}
The problem is - not really uniform, in some sense. To generate uniform, we use a reverse map:
y_min, y_max : such that
2 * y_min + 1 -> min
2 * y_max + 1 -> max
If min, max are not that way, we change them by something. Now we have a cleaner solution:
def getRandomOdd( min, max){
r = random()
y_min = min/2
if ( 2 /? max ){ max - 1 }
y_max = max/2
2 * r.num( y_min, y_max ) + 1
}
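The reverse-map version in Python; the y_min/y_max bounds follow the 2*y + 1 mapping described above (my sketch, assuming lo <= hi and at least one odd number in the range):

```python
import random

def get_random_odd(lo, hi):
    # the odd numbers in [lo, hi] are exactly 2*y + 1 for y in [lo//2, (hi-1)//2],
    # so a uniform y gives a uniform odd number
    y_min = lo // 2
    y_max = (hi - 1) // 2
    return 2 * random.randint(y_min, y_max) + 1
```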
:: - steephen
====
Your + operator is *mutable*. It should never be - definitely NO for an arithmetic type.
What you did is this:
stackoverflow.com/questions/4581961/c-how-to-overload-operator
wikipedia.org/wiki/Immutable_object
Sorting and everything is irrelevant here.
Read very carefully about the formulation : [ wikipedia.org/wiki/Partition_problem ]
========
The algorithm can be extended to the k-way multi-partitioning problem, but then takes O(n(k − 1)mk − 1) memory where m is the largest number in the input, making it impractical even for k = 3 unless the inputs are very small numbers.
=========
k = 2
M = [ -1,1,1,1,8,10 ]
N = size(M)
last = k ** N // the largest one k to the power N
min = num('inf') // well, infinity
min_p = null // nothing
// now we iterate - recursion only cool for divine
for ( n : [0:last] ){
s = str(n,k) // representing number n in base k
// reject when all the k digits do not exist in base k rep
continue( size(set(s.value) ) != k )
// left pad with zeros thus making the string N size
s = '0' ** ( N - size(s)) + s
// collect into k partitions ( change the char into int as key )
p = mset( M ) as def(inx){ int(s[inx]) }
// now generate total - calculating all pairs between 0,k-1
tot = sum ( comb( [0:k] , 2 ) ) as def(inx, pair){
// size(x) is abs(x)
size( sum(p[pair[0]]) - sum(p[pair[1]]))
}
if ( tot < min ){
min = tot
min_p = p
}
}
// well...
println(min)
println(min_p)
// and Finally, Galaxy has peace!
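The same base-k enumeration reads a little more directly with itertools.product in Python (my rendering; it keeps the O(k**N) cost of the original):

```python
from itertools import product

def min_k_partition(values, k):
    # assign each value to one of k buckets; keep the assignment that
    # minimises the summed pairwise difference of bucket sums
    best_cost, best = float('inf'), None
    for assign in product(range(k), repeat=len(values)):
        if len(set(assign)) != k:  # every bucket must be non-empty
            continue
        sums = [0] * k
        for v, b in zip(values, assign):
            sums[b] += v
        cost = sum(abs(sums[i] - sums[j])
                   for i in range(k) for j in range(i + 1, k))
        if cost < best_cost:
            best_cost, best = cost, assign
    return best_cost, best
```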
Sudip, I thought of upvoting you - and then I found this :
int[] arr = new int[]{-1, 1, 1, 1, 10, 8 };
int K = 2;
Your code shows:
=====
8 1 1 1
10 -1
=====
Actual solution :
=====
8 1 1 :: 10
10 -1 1 :: 10
=====
Think more about a precise formulation.
This problem is imprecise. Observe this:
a = [ 0 , 1000, 10000] // k = 1
a = [ 0 , 1000, 10000] // k = 3
Thus, the problem needs to be precisely stated.
The proper formulation will entail :
Given there is an array of sorted values, and a parameter k, group the values into k buckets (bucket_i s) such that :
M = sum ( abs( sum(bucket_i) - sum(bucket_j) ) )
gets minimized.
// ZoomBA
// define Point
def Point : {
$$ : def(x=0,y=0){ $.x = x ; $.y = y },
$add : def (o) { new ( Point , $.x + o.x , $.y + o.y ) },
$str : def(){ str('(%s,%s)', $.x, $.y)}
}
a = new ( Point, 1,2)
b = new ( Point, 3,4)
c = a + b
println(c)
People have already massively overthought this - and they did not think about what should be thought about: en.wikipedia.org/wiki/Similarity_measure.
Take a pause and think this.
Is it transitive? That is :
A similar_to B and B similar_to C ==>(implies) A similar_to C
does it hold true? 99.999% of the real practical cases, it won't be.
Take name for example.
ram : ok, male
rama : ok, male again perhaps
ramaa : ambiguous - perhaps it is a female name ? Now thinking backward,
ramaa --> rama :: hmm. See the problem there?
what about roma? ramaa and roma are close but is ram and roma close? Well.
That... pose a problem.
How did we get it? By working in a system where close name matching may point to the same individual - facebook is another classic example. The same problem persists in Indian addresses.
Thus, we need to ask, is the similarity function transitive? That would have clear impact on any algorithm we try to apply.
// ZoomBA : the data
data = [ [1,3,5,9],
[1,2,1,2],
[1,4,7,9],
[20,25,20,35] ]
// operation 1 :
repeat_in_rows = sum ( data ) -> { (size( set($.o) ) != size($.o) )? 1:0 }
println( repeat_in_rows )
// operation 2 :
num_dup = 2
num_dup_in_rows = sum ( data ) -> {
// get mset, key, num_of_occurences
ms = mset($.o)
// when there are at least 2 keys with num_of_occurences >= num_dup :: dup
dups = select( ms.values ) :: { break( size($.p) >= num_dup ) ; $.o > 1 }
// simple
size(dups) == num_dup ? 1 : 0
}
println( num_dup_in_rows )
// ZoomBA
/*
We are assuming only alpha.
Thus, the general regex for the string is :
word -> [a-zA-Z][\d]+
s = ( word )+
Thus, the problem can be solved by tokenization of word
and then expanding the word
*/
word = '[a-zA-Z]\\d+'
string = "a4b2c2a3f1g2"
println(string)
// tokenize on string
l = tokens( string , word ) -> {
// extract letter
letter = (tokens($.o, '[a-zA-Z]' ))[0]
// extract frequency
frequency = (tokens($.o, '\\d+' ))[0]
// expand the letter with frequency : a ** 2 -> aa
letter ** int(frequency)
}
// finally catenate over the list
println( str(l,'') )
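The same tokenise-and-expand idea in Python (my sketch; it assumes the well-formed letter-digit alternation described above):

```python
import re

def expand(s):
    # each token: one letter followed by its repeat count
    return ''.join(letter * int(freq)
                   for letter, freq in re.findall(r'([a-zA-Z])(\d+)', s))
```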
I will go with Chris, any day.
Just minimising more in ZoomBA - not using the sorted property:
println( select( mset( array ) ) :: { $.value > size(array)/4 } )
/*
A better way to look it using sequences
Define a Sequence S, such that it is strictly increasing
and generated by the rule of sum of non-negative multiples of the numbers in the array.
Thus, S(0) = 0 and we go in a = [6,9,20]
S(1) = 6
S(2) = 9
S(3) = 12 = 6 * 2
S(4) = 15 = 6 + 9
S(5) = 18 = ( 3 * 6 , 9*2 )
We use ZoomBA to solve it and show a nice pattern.
This is solvable by adding 6,9,20 to each item encountered before in the sequence,
and taking, as the next item, the minimum of the candidates larger than the current max.
Then the next item is generated. Thus, the problem is solved when we have
a target n such that
S(k) < n <= S(k+1)
To generate this array a from the bases is easy, and can be left as an exercise to the reader.
Hint: use the same algorithm and start with 0,min_of_base
*/
bases = [6,9,20]
// first few items of the sequence from the bases till 20
a = [ 0, 6, 9, 12 , 15 , 18, 20 ]
s = seq( a ) -> {
cached = $.p // previous items
last_no = cached[-1] // last item
maxes = list ( bases ) -> {
item = $.o // store the individual base items
ix = index ( cached ) :: { $.o + item > last_no }
cached[ix] + item // we find where we max - so store it
}
#(min,Max) = minmax( maxes ) // find min of the maxes
min // return min as the next item in the sequence
}
// now call
def find_some( n ){
if ( n <= s.history[-1] ) return n @ s.history // obvious
while ( s.history[-1] <= n ){ s.next } // iterate over
return n @ s.history // and then is trivial
}
println( find_some(47) )
println( find_some(23) )
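Membership in the sequence can also be tested with a small dynamic program in Python (an alternative sketch of mine, not the sequence-iterator above):

```python
def representable(n, bases=(6, 9, 20)):
    # reachable[i] is True when i is a sum of non-negative multiples of bases
    reachable = [True] + [False] * n
    for i in range(1, n + 1):
        reachable[i] = any(i >= b and reachable[i - b] for b in bases)
    return reachable[n]
```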
Reasonable, but not bug free.
a = list( [ 0:10 ] ) -> { random(true)? 0 : random(10) + 1 }
def move_0_right(arr){
first_zero = 0
last_non_zero = size(arr) - 1
while ( true ){
while ( arr[last_non_zero] == 0 ) { last_non_zero -= 1 }
while ( arr[first_zero] != 0 ) { first_zero += 1 }
break( first_zero > last_non_zero )
arr[first_zero] = arr[last_non_zero]
arr[last_non_zero] = 0
}
last_non_zero + 1
}
println(a)
x = move_0_right(a)
println(a)
println(x)
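A Python version of the same two-pointer scheme, with a small guard for the all-zero case (my translation; like the original, it does not preserve the order of the non-zero elements):

```python
def move_zeros_right(arr):
    # fill zeros from the left with non-zeros taken from the right
    first_zero, last_non_zero = 0, len(arr) - 1
    while True:
        while last_non_zero >= 0 and arr[last_non_zero] == 0:
            last_non_zero -= 1
        while first_zero < len(arr) and arr[first_zero] != 0:
            first_zero += 1
        if first_zero > last_non_zero:
            break
        arr[first_zero], arr[last_non_zero] = arr[last_non_zero], 0
    return last_non_zero + 1  # number of non-zero entries
```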
// ZoomBA
A = [1,2,4,-6,5,7,9]
B = [3, 6, 3, 4, 0 ]
res = join ( A, B ) :: { 5 == sum($.o) }
println(res)
// ZoomBA
word = "ffgggtvshjsdhjfffffffhvjbjcharu"
max = { 'count' : 0, 'letter' : null , 'current' : 0 }
reduce ( word.value ) -> {
continue ( $.o == $.p ){ max.current += 1 ; $.o }
if ( max.count < max.current ){
max.count = max.current
max.letter = $.p
max.current = 1
}
$.o
}
max -= 'current'
println( max )
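itertools.groupby makes the same scan a one-liner in Python (my equivalent, not the poster's):

```python
from itertools import groupby

def longest_run(word):
    # group consecutive equal letters and keep the longest group
    letter, count = max(((ch, len(list(g))) for ch, g in groupby(word)),
                        key=lambda p: p[1])
    return {'letter': letter, 'count': count}
```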
// ZoomBA
words = ['May', 'student', 'students', 'dog', 'studentssess',
'god', 'Cat', 'act', 'tab', 'bat', 'flow', 'wolf', 'lambs',
'Amy', 'Yam', 'balms', 'looped', 'poodle', 'john', 'alice' ]
// now do magic
m = mset( words ) -> { k = $.o.toLowerCase() ; str(sset(k.value) ,'') }
fold ( m ) -> { println( $.value ) }
Now the output :
$ zmb tmp.zm
[student, students, studentssess]
[tab, bat]
[Cat, act]
[john]
[alice]
[lambs, balms]
[May, Amy, Yam]
[looped, poodle]
[dog, god]
[flow, wolf]
Too complex an implementation was presented; we can do much simpler:
def Sample : {
$$ : def(){
$.its = list([0:random(4) + 1 ]) ->{
l = list([0:random(5)]) ->{ random(100) }
println(l)
l.iterator()
}
$.index = -1
},
hasNext : def(){
while ( !empty($.its) ) {
$.index = ($.index + 1) % size( $.its )
it = $.its[$.index]
if ( it.hasNext ) { return true }
$.its.remove( $.index )
$.index -= 1
}
false
},
next : def(){ return $.its[ $.index].next() }
}
s = new ( Sample )
while ( s.hasNext() ){
printf(' %s ', s.next())
}
println()
Amazingly complex solutions.
===============
NOTE: When you are given a hammer - everything looks like a nail.
Thus, simply because the trie exists does not mean we need to use it. :-)
===============
A much practical solution is this:
1. Take the filter string, replace every '*' with '.*' and append '^' in the front and '$' in the back.
2. Thus
ab* ==> ^ab.*$
3. Yes, you guessed it right, now simply store these things as pattern
4. and then loop over and match the patterns.
Cool?
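Steps 1-4 in Python, with the extra care of escaping any other regex metacharacters in the filter (my sketch):

```python
import re

def make_filter(pattern):
    # 'ab*' -> '^ab.*$': escape the literal parts, let '*' match any run
    regex = '^' + '.*'.join(re.escape(part) for part in pattern.split('*')) + '$'
    return re.compile(regex)
```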
// NOT FOR Competitive stuff - it is meaningless for practical purposes.
// but cool...
/*
Given S is then sorted sequence of numbers such that
Binary representation contains only 2 bits set to 1.
Find S(n).
We solve it by creating an iterator.
*/
s = seq( 3 ) -> {
bs = str($.p[-1],2)
li = rindex(bs.value, _'1' )
if ( li == 1 ){
r = 2 ** #|bs| + 1
} else {
bs.value[li] = _'0'
bs.value[li-1] = _'1'
r = int(bs,2,0)
}
r
}
fold ( [1:9] ) -> { println( s.next ) }
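Without the string manipulation, the same sequence falls out of a double loop over bit positions in Python (my alternative sketch):

```python
def two_bit_numbers():
    # for each high bit j, emit (1<<j)|(1<<i) for every lower bit i;
    # this yields the numbers with exactly two set bits in increasing order
    j = 1
    while True:
        for i in range(j):
            yield (1 << j) | (1 << i)
        j += 1

gen = two_bit_numbers()
first_eight = [next(gen) for _ in range(8)]
```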
The question is ill-posed - with the definition of identifier being a string.
Yes, identifiers are special strings - but not any string. One should formulate the problem as token matching using a regex.
<ID> := [a-zA-Z_][a-zA-Z_0-9]*
assignment -> ID '=' [0-9]+ ';'
and we are good. Clearly in this form, we can make the assignment itself a regex and can use named patterns and that is the answer.
/* Sounds like cheating - it is not
The problem of iterating over power set
is done by sequences() function */
a = [1,2,3,20,4,9,89,54,21]
k = 16
v = find( sequences( [0:size(a)] ) ) :: {
sum( $.o ) -> { a[$.o] } == k
}
printf( 'Indices are: %s, size: %d\n', v.value, size(v.value) )
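The power-set walk translates to itertools.combinations in Python (my sketch; like the original, it is exponential in the worst case):

```python
from itertools import combinations

def indices_with_sum(a, k):
    # try index subsets of growing size; return the first whose values sum to k
    for r in range(1, len(a) + 1):
        for idx in combinations(range(len(a)), r):
            if sum(a[i] for i in idx) == k:
                return idx
    return None
```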
Cutting to the chase - dropping all useless information, the problem is succinctly summarised by:
=======================================
Given a list of circles of various radii and centre:
1. Would it be possible to move from one starting point to another
2. What would be the min path and the circles
=======================================
Thus we end up having a directed graph, where :
======
1. Nodes are the circles
2. Edges between nodes exist, if and only if the circles overlap
3. Circles having centre with y larger than the North bank y are north bank stones
4. Circles having centre with y smaller than the South bank y are south bank stones
=========
Now, we need to find out all paths from one bank to another, and the minimum path.
This is doable by iterating over all source nodes - and rejecting paths.
Do not try to solve it - it would be at best a useless exercise.
Here we go :
[ stackoverflow.com/questions/4530120/finding-the-longest-cycle-in-a-directed-graph-using-dfs ]
It is NP complete.
Will the strings be ones in a language dictionary? In that case, people should simply send the cardinal number (index) of the word in the dictionary, as well as the dictionary identifier. This is known as code-book encoding, and is the fastest and cheapest encoding known to mankind.
[ wikipedia.org/wiki/Block_cipher_mode_of_operation ]
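A toy illustration of code-book encoding in Python (the codebook contents here are made up for the example):

```python
def encode(words, codebook):
    # transmit only each word's index in a dictionary both sides share
    return [codebook.index(w) for w in words]

def decode(indices, codebook):
    # the receiver looks the indices back up in the same dictionary
    return [codebook[i] for i in indices]
```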
This question has a deep formulation problem - of speaking in English.
Observe the following :
l1 = [ 1,1,1,1,1,1,2]
l2 = [1,2]
l3 = [ ]
What is the expected output?
Now, coming back again :
l1 = [ 1,1,1,1,1,1,2]
l2 = [1,2,2,2,2,2,2]
l3 = [ ]
what will be the output?
Moreover :
l1 = [ 1,2]
l2 = [2,3]
l3 = [1,3]
what will be the output?
The term *repeated across* is not properly defined - in conjunction with
*it is ok to have repeated items in a list*.
Amazon recommendation system is not code, but mathematics and statistics, specifically linear algebra.
Here are links to how they have implemented it.
[ cs.umd.edu/~samir/498/Amazon-Recommendations.pdf ]
[ google.co.in/patents/US8095521 ]
critical_time = time() - 'P30d'
/* Extremely simple */
for( file('my_folder_path') ) {
if( $.file && time($.lastModified) < critical_time ){ $.delete }
}
/*
First - the standard way
*/
for( file('my_csv_file.csv') ) {
x = $.split(',')
println( size(x) > 1 ? x[-2] : '' )
}
/* The shell way:
unix.stackexchange.com/questions/17064/how-to-print-only-last-column
*/
/* shows how declarative paradigm is powerful */
def count_zero_sum_sub_arrays( arr ){
len = size(arr)
// generate sum from combinations of [0,1,2,...len-1] taking 2 at a time
sum ( comb( [0:len] , 2 ) ) ->{
// when the sum is 0 add 1, else 0
( 0 == sum( [$.0:$.1+1] ) -> { arr[$.o] } ) ? 1 : 0
}
}
arr = [-1,1,-1,1]
println( count_zero_sum_sub_arrays(arr) )
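The same count via prefix sums in Python (my sketch; unlike the ZoomBA version it also counts length-1 subarrays, which changes nothing for this input):

```python
from itertools import accumulate, combinations

def count_zero_sum_sub_arrays(arr):
    # subarray arr[i..j] sums to zero iff prefix[j+1] == prefix[i]
    prefix = [0] + list(accumulate(arr))
    return sum(1 for i, j in combinations(range(len(prefix)), 2)
               if prefix[i] == prefix[j])
```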
Too much code. I don't like to code at all. Be lazy.
The result if you run it :
Let's try another one:
Hope this should do fine. - NoOne, April 25, 2017
Neil Mitchell wrote:
> Hi,
>
> With binary 0.5,
>
>     src <- decodeFile "_make/_make"
>     return $! src
>
> should close the file, assuming that all the data is read from the file,

thanks to this patch:

    Mon Aug 25 23:01:09 CEST 2008  Don Stewart <dons at galois.com>
      * WHNF the tail of a bytestring on decodeFile, will close the resource

For older versions,

    import qualified Data.Binary.Get as Get

    data EOF = EOF

    instance Binary EOF where
        get = do
            eof <- Get.isEmpty
            return (if eof then EOF else error "EOF expected")
        put EOF = return ()

    ...
    (src, EOF) <- decodeFile "_make/_make"

accomplishes the same effect.

Btw, contrary to what Duncan said, Get is a lazy monad (lazy in its actions, that is):

    instance Binary EOF where
        get = do
            eof <- Get.isEmpty
            when (not eof) (error "EOF expected")
            return EOF
        put EOF = return ()

does not help, because the result (EOF) does not depend on the value returned by isEmpty.

The idea of using isEmpty for closing the file is not perfect though; due to the lazy nature of Get, there's a stack overflow lurking below:

    main = do
        encodeFile "w.bin" [0..1000000 :: Int]
        m <- decodeFile "w.bin"
        print $ foldl' (+) 0 (m :: [Int])

One idea to fix this is to force the read data before checking for EOF, as follows:

    data BinaryRNF a = BinaryRNF a

    instance (NFData a, Binary a) => Binary (BinaryRNF a) where
        get = (\a -> rnf a `seq` BinaryRNF a) `fmap` get
        put (BinaryRNF a) = put a

    main = do
        encodeFile "w.bin" [0..1000000 :: Int]
        (BinaryRNF m, EOF) <- decode `fmap` L.readFile "w.bin"
        print $ foldl' (+) 0 (m :: [Int])

HTH,
Bertram
Plotting only specific points using matplotlib's imshow
import numpy as np
import matplotlib.pyplot as plt

N = 101
x = np.linspace(-1, 1, N)
ones = np.ones_like(x)
coords = np.outer(ones, x)  # x coords
coords = np.concatenate([[coords], [coords.T]])

ourShape = np.zeros([N, N])
ourShape[np.square(coords[0, :, :]) + np.square(coords[1, :, :]) <= 1.] = 1.

fig, ax = plt.subplots()
ax.imshow(ourShape)
plt.show()
This plots a circle inscribed in a square. But how do I get python to plot only the blue region, which is part of the square and not the circle? To be clear, I do not want to just turn the circle white; I want it to not plot at all. I tried
ax.imshow(ourShape[ourShape < 1.])
and that produces a TypeError.
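One common approach (a sketch, not tested against the asker's exact setup) is to mask the unwanted cells with `numpy.ma`; `imshow` simply does not paint masked cells, so the circle is left blank rather than drawn in white:

```python
import numpy as np

N = 101
x = np.linspace(-1, 1, N)
xx, yy = np.meshgrid(x, x)

square = np.ones((N, N))
inside_circle = xx**2 + yy**2 <= 1.0

# Mask the circle: masked cells are skipped by imshow entirely,
# so only the corner regions of the square get painted.
masked = np.ma.masked_where(inside_circle, square)

# Pass the masked array straight to imshow, e.g.:
#   fig, ax = plt.subplots()
#   ax.imshow(masked)   # corners only; the circle is left unplotted
print(masked.mask.sum())  # number of hidden (circle) cells
```

Indexing with `ourShape[ourShape < 1.]` fails because it collapses the 2-D grid into a 1-D array of values, which `imshow` cannot interpret as an image; masking preserves the 2-D shape.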
- Matplotlib linked graphical analysis with scatter and bar plot
I am doing financial data analysis and I have the following json data
data = {
    'result 1': {
        'Function values': [-0.006227277397795026, 0.06383399272822152],
        'Optimized weights': [('bonds', 1.4711817429946462e-15), ('forex', 5.550786381057787e-17), ('bitcoin', 0.9999999999999982), ('gold', 1.3183117655012245e-16), ('nyse', 9.697481555102928e-21), ('restate', 1.9775692500329443e-16)]},
    'result 2': {
        'Function values': [-0.005261601871866209, 0.0458160221997947],
        'Optimized weights': [('bonds', 0.3615398862486454), ('forex', 4.466610459077055e-25), ('bitcoin', 0.612702265453708), ('gold', 2.427629820084969e-25), ('nyse', 6.605705692605664e-25), ('restate', 0.02575784829764666)]},
    'result 3': {
        'Function values': [-0.006227277397795026, 0.06383399272822152],
        'Optimized weights': [('bonds', 1.4712010644711342e-15), ('forex', 5.55073975746577e-17), ('bitcoin', 0.9999999999999982), ('gold', 1.3183006923981202e-16), ('nyse', 1.1072819449579044e-20), ('restate', 1.977567049973079e-16)]},
    'result 4': {
        'Function values': [-0.0026994057255751948, 0.022810143209823008],
        'Optimized weights': [('bonds', 0.1799574677817418), ('forex', 4.974066668895019e-25), ('bitcoin', 0.3047792953486622), ('gold', 0.0), ('nyse', 0.0), ('restate', 0.5152632368695961)]}
}
The objective is to make the following kind of graph in which
Function values are plotted on the scatter plot, with each point marked in a specific color, and the same color will be used in the bar chart below to represent its Optimized weights values.
I know how to plot simple scatter plots and bar plots using matplotlib
import matplotlib.pyplot as plt
import numpy as np

# Separate scatter plot
plt.scatter(data['result 1']['Function values'], color='red')
plt.scatter(data['result 2']['Function values'], color='blue')
plt.scatter(data['result 3']['Function values'], color='orange')
plt.scatter(data['result 4']['Function values'], color='green')
plt.show()

# Separate bar plot for result 1, and similarly bar plots for all
asset_1 = data['result 1']['Optimized weights']
asset_1 = np.array(asset_1)
n_assets = np.arange(6)
plt.bar(asset_1[1], color='red')
plt.xticks(n_assets, asset_1[0])
plt.show()
But here they are separated. Is there any to combine them like the figure I made above? Thanks
- Seaborn Lineplot Module Object Has No Attribute 'Lineplot'
Using seaborn's documentation code to generate a lineplot returns an AttributeError: 'module' object has no attribute 'lineplot'. I have updated seaborn and reimported the module and tried again, no luck. Did lineplot get retired, or is there something else going on?
import seaborn as sns; sns.set()
import matplotlib.pyplot as plt

fmri = sns.load_dataset("fmri")
ax = sns.lineplot(x="timepoint", y="signal", data=fmri)
- Matplotlib tick label precision
When defining tick labels I get an abnormally high level of precision. For example:
import numpy as np
import pylab as pl

fig = pl.figure(figsize=(3.25, 2.5))
ax0 = fig.add_subplot(111)
ax0.set_ylim([0, 0.5])
ax0.set_yticks(np.arange(0, 0.51, 0.1), minor=False)
ax0.set_yticklabels(np.arange(0, 0.51, 0.1), fontsize=8)
ax0.set_xlim([0, 0.5])
ax0.set_xticks(np.arange(0, 0.51, 0.1), minor=False)
ax0.set_xticklabels(np.arange(0, 0.51, 0.1), fontsize=8)
fig.show()
The output figure is below with the bad tick labels on the 0.3 marker (both x and y axes). I have tried using np.linspace, which yields the same problem.
I understand the issues with floating point precision, but I would expect the label to be rounded off a lot sooner. How do I correct this to only show the first decimal?
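The stray digits come from `np.arange` itself, not from the axes: binary floats cannot represent 0.3 exactly, so the third tick value carries the extra precision into the label. A minimal sketch of the problem and one fix (round the array before handing it to `set_yticklabels`):

```python
import numpy as np

ticks = np.arange(0, 0.51, 0.1)
print(ticks[3])  # 0.30000000000000004 — the source of the "bad" label

# Rounding before using the values as tick labels gives clean
# one-decimal labels:
labels = np.round(ticks, 1)
print(labels)  # [0.  0.1 0.2 0.3 0.4 0.5]
```

Alternatively, matplotlib's `matplotlib.ticker.FormatStrFormatter('%.1f')` can be attached to the axis so the formatting is handled at draw time instead of by pre-rounding the data.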
- ggplot using position dodge push bars to the left when some count is zero
When zero-count cases are present, geom_bar() with position_dodge(preserve = 'single') and a 'fill' aesthetic leaves an empty space. If the missing bar is not the last, all remaining bars get pushed to the left. The phenomenon is visible in the following picture with a two-level variable for the fill aesthetic.
Sometimes red bars seem one next to the other because some blue bars are missing.
I would like to keep the order even with missing cases, that is blue bars on the left and red ones on the right.
- Visualizing changes in number of followers according to user ID (values of column)
I have been working with R only for a short time.
I have a dataframe called "users" where column 2 "userid" consists of 27 user IDs (character) which are repeated several times as new information is added several times a day.
Column 5 "total_followers" shows the number of followers of the users (numeric/integer)
My question is:
How can I visualize the development/trend of the number of followers of each user according to user ID?
I would like to create a line for each user showing the development (either one plot for all, or one for each user).
Thank you!
- Analyse CAN signals in CANoe tests
When running tests via the CANoe test environment, I can check the instantaneous value of a signal using e.g. checkSignalInRange(). For some signals it would make more sense to evaluate typical physical attributes like amplitude, frequency, period and mean value. Is there a way to do it in CANoe?
As an acceptable workaround, is it possible to set up some signals to be recorded during a test, and include the signal plot into the test report?
|
http://quabr.com/48758000/plotting-only-specific-points-using-matplotlibs-imshow
|
CC-MAIN-2018-34
|
refinedweb
| 970
| 70.7
|
ReleaseSemaphore
This function increases the count of the specified semaphore object by a specified amount.
- hSemaphore
[in] Handle to the semaphore object. The CreateSemaphore function returns this handle.
- lReleaseCount
[in] Specifies the amount by which the current count of the semaphore object is to be increased. The value must be greater than zero. If the specified amount would cause the count of the semaphore to exceed the maximum count that was specified when the semaphore was created, the count is not changed and the function returns FALSE.
- lpPreviousCount
[out] Long pointer to a 32-bit variable to receive the previous count for the semaphore. This parameter can be NULL if the previous count is not required.
Nonzero indicates success. Zero indicates failure. To get extended error information, call GetLastError.
Each object type, such as memory maps, semaphores, events, message queues, mutexes, and watchdog timers, has its own separate namespace. Empty strings, "", are handled as named objects. On Windows desktop-based platforms, synchronization objects all share the same namespace.
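The count rules described above can be sketched outside Win32 (a Python model for illustration only, not the actual API): the release fails and leaves the count untouched if it would exceed the maximum, and otherwise reports the previous count, mirroring the lReleaseCount and lpPreviousCount semantics:

```python
class Semaphore:
    """Toy model of the ReleaseSemaphore count rules (not the Win32 API)."""

    def __init__(self, initial, maximum):
        self.count = initial
        self.maximum = maximum

    def release(self, release_count):
        """Return (succeeded, previous_count_or_None)."""
        if release_count <= 0:
            return False, None  # the amount must be greater than zero
        if self.count + release_count > self.maximum:
            return False, None  # count is NOT changed, like a FALSE return
        previous = self.count
        self.count += release_count
        return True, previous

sem = Semaphore(initial=1, maximum=3)
print(sem.release(2))  # (True, 1) — count is now 3
print(sem.release(1))  # (False, None) — would exceed the maximum
```

The key point the model captures is that an over-release is rejected atomically: the count never partially increases.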
|
http://msdn.microsoft.com/en-us/library/bb202751.aspx
|
CC-MAIN-2014-42
|
refinedweb
| 168
| 59.4
|
I'm new to swift and I'm trying to make a little simple app that solves equations (linear or quadratic ones). The user has to enter the values for a,b,c (the equation is: ax^2 + bx + c = 0) and, if a = 0, the app solves the linear equation, if a != 0, the app solves the quadratic equation (or tells it has no solution).
The problem is.. the linear equations part works just fine and gives the right results, but the app gives wrong results for the quadratic equations!
For example if a = 4, b = 12, c = 8, the result should be x1 = - 1, x2 = -2, but the solutions I get here in the app are x1 = -16.0, x2 = -32.0
x1/x2 seems to be right (0.5) but the results are completely wrong
This is the code I wrote
import UIKit
import Foundation
class ViewController: UIViewController {
@IBOutlet weak var EnterA: UITextField!
@IBOutlet weak var EnterB: UITextField!
@IBOutlet weak var EnterC: UITextField!
@IBOutlet weak var LabelYourEquation: UILabel! // gonna deal with this later
@IBOutlet weak var Button: UIButton!
@IBOutlet weak var LabelX1: UILabel!
@IBOutlet weak var LabelX2: UILabel!
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
Hide()
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
var a = Double()
var b = Double()
var c = Double()
var delta = Double()
var x1 = Double()
var x2 = Double()
var linearEquation = Bool()
var quadraticEquation = Bool()
func checkEquation() {
if a == 0 {
linearEquation = true
quadraticEquation = false
} else if a != 0 {
quadraticEquation = true
linearEquation = false
}
}
func linearEquationResolution() -> Double {
x1 = -c / b
LabelX2.hidden = true
LabelX1.text = "x = \(x1)"
return x1
}
func Delta() -> Double {
delta = (b * b) - (4 * a * c)
return delta
}
func quadraticEquationResolution() {
if delta >= 0 {
x1 = ( -b + sqrt(delta)) / 2*a
x2 = ( -b - sqrt(delta)) / 2*a
LabelX1.hidden = false
LabelX2.hidden = false
LabelX1.text = "x1 = \(x1)"
LabelX2.text = "x2 = \(x2)"
} else if delta < 0 {
LabelX1.text = "The equation has no solution"
LabelX2.hidden = true
}
}
func solveIt() {
if linearEquation == true {
linearEquationResolution()
} else if quadraticEquation == true {
Delta()
quadraticEquationResolution()
}
}
func Hide() {
LabelX2.hidden = true
LabelX1.hidden = true
}
@IBAction func SolveItAction(sender: AnyObject) {
a = Double(EnterA.text!)!
b = Double(EnterB.text!)!
c = Double(EnterC.text!)!
checkEquation()
solveIt()
}
}
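The wrong results come from operator precedence in quadraticEquationResolution: `( -b + sqrt(delta)) / 2*a` parses as `((-b + sqrt(delta)) / 2) * a`, i.e. it multiplies by `a` instead of dividing by `2a`. With a = 4, b = 12, c = 8 that produces exactly the reported -16.0 and -32.0. Writing the denominator as `(2 * a)` fixes it; a quick check of both parses (in Python for brevity, not Swift):

```python
from math import sqrt

def quadratic_roots(a, b, c):
    # The Swift code's bug: "/ 2*a" parses as "(... / 2) * a".
    # The fix is an explicit denominator: "/ (2 * a)".
    delta = b * b - 4 * a * c
    if delta < 0:
        return None  # no real solution
    x1 = (-b + sqrt(delta)) / (2 * a)
    x2 = (-b - sqrt(delta)) / (2 * a)
    return x1, x2

print(quadratic_roots(4, 12, 8))  # (-1.0, -2.0), as expected

# The buggy parse, for comparison:
a, b, c = 4, 12, 8
delta = b * b - 4 * a * c
print((-b + sqrt(delta)) / 2 * a)  # -16.0, the reported wrong answer
```

Division and multiplication have equal precedence and associate left-to-right in both Swift and Python, which is why the explicit parentheses are required.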
|
https://codedump.io/share/1DbCsbkTV7g4/1/swift---quadratic-equation-solver-gives-wrong-results
|
CC-MAIN-2017-30
|
refinedweb
| 373
| 60.31
|
This page provides an overview of the Google Kubernetes Engine dashboards available in Google Cloud Console.
Overview
Cloud Console offers useful dashboards for your project's GKE clusters and their resources. You can use these dashboards to view, inspect, manage, and delete resources in your clusters. You can also create Deployments from the Workloads dashboard.
In conjunction with the gcloud and kubectl command-line tools, the GKE dashboards are helpful for DevOps workflows, troubleshooting issues, and when working with multiple GKE clusters or Google Cloud Platform projects. Rather than using the command line to query clusters for information about their resources, you can use these dashboards to get information about all resources in every cluster quickly and easily.
The following dashboards are available for GKE:
- Kubernetes clusters displays the clusters in your current project. Displays each cluster's name, compute zone, cluster size, total cores, total memory, node version, outstanding notifications, and labels.
- Workloads displays workloads (Deployments, StatefulSets, DaemonSets, Jobs, and Pods) deployed to clusters in your current project. Includes each workload's name, status, type, number of running and total desired Pods, namespace, and cluster. Features a YAML-based text editor for inspecting and editing deployed resources, and a Deploy mechanism for creating stateless applications in your clusters.
- Services displays your project's Service and Ingress resources. Displays each resource's name, status, type, endpoints, number of running and total desired Pods, namespace, and cluster.
- Configuration displays your project's Secret and ConfigMap resources.
- Storage displays PersistentVolumeClaim and StorageClass resources associated with your clusters.
GKE dashboards
The following sections discuss each dashboard and its features.
Kubernetes clusters
Kubernetes clusters shows every Kubernetes cluster you have created in your project. You can use this dashboard to inspect details about clusters, make changes to their settings, connect to them using Cloud Shell, and delete them.
Additionally, you can easily upgrade your cluster and node versions from this dashboard. When a new upgrade is available, the dashboard displays a notification for the relevant cluster.
You can select a cluster to view a page about that cluster, which includes the following tab views:
- Details displays the current settings for the cluster and its node pool.
- Storage displays the persistent volumes and storage classes provisioned for the cluster's nodes.
- Nodes lists all of the cluster's nodes and their requested CPU, memory, and storage resources.
From this dashboard, you can select a cluster and click Edit to make changes to the cluster's settings.
Workloads
You can use the Workloads dashboard to inspect, manage, edit, and delete workloads deployed to your clusters.
You can also deploy stateless applications using the menu's Deploy mechanism. For more information, refer to Deploying a Stateless Application.
You can select a workload from the list to view a page about that resource, which includes several tab views:
- Details displays the current settings for the workload, including its usage metrics, labels and selectors, update strategy, Pods specification, and active revisions.
- Managed pods lists the Pods that are managed by the workload. You can select a Pod from the list to view that Pod's details, events, logs, and YAML configuration file.
- Revision history lists each revision of the workload, including the active revision.
- Events lists human-readable messages for each event affecting the workload.
- YAML displays the workload's live configuration. You can use the YAML-based text editor provided in this menu to make changes to the workload. You can also copy and download the configuration from this menu.
You can use the dashboard's filter search to list only specific workloads. By default, Kubernetes system objects are filtered out.
Some workloads have an Actions menu with convenient buttons for performing common operations. For example, you can autoscale, update, and scale a Deployment from its Actions menu.
Services
Services displays the load-balancing Service and traffic-routing Ingress objects associated with your project. It also displays the default Kubernetes system objects associated with networking, such as the Kubernetes API server, HTTP backend, and DNS.
You can select a resource from the list to view a page about that resource, which includes several tab views:
- Details displays information about the resource, including its usage metrics, IP, and ports.
- Events lists human-readable messages for each event affecting the resource.
- YAML displays the resource's live configuration. You can use the YAML-based text editor provided in this menu to make changes to the resource. You can also copy and download the configuration from this menu.
Configuration
Configuration displays configuration files, Secrets, ConfigMaps, environment variables, and other configuration resources associated with your project. It also displays Kubernetes system-level configuration resources, such as tokens used by service accounts.
You can select a resource from this dashboard to view a detailed page about that resource. Sensitive data stored in Secrets is not displayed in the console.
Storage
Storage lists the storage resources provisioned for your clusters. When you create a PersistentVolumeClaim or StorageClass resource to be used by a cluster's nodes, those resources appear in this dashboard.
This dashboard has the following tab views:
- Persistent volume claims list all PersistentVolumeClaim resources in your clusters. You use PersistentVolumeClaims with StatefulSet workloads to have those workloads claim storage space on a persistent disk in the cluster.
- Storage classes list all StorageClass resources associated with your nodes. You use StorageClasses as "blueprints" for using space on a disk: you specify the disk's provisioner, parameters (such as disk type and compute zone), and reclaim policy. You also use StorageClass resources for dynamic volume provisioning, which allow you to create storage volumes on demand.
You can select a resource from these dashboards to view a detailed page for that resource.
Kubernetes Dashboard
The Kubernetes Dashboard add-on is disabled by default on GKE.
Starting with GKE v1.15, you will no longer be able to enable the Kubernetes Dashboard by using the add-on API. You will still be able to install Kubernetes Dashboard manually by following the instructions in the project's repository. For clusters in which you have already deployed the add-on, it will continue to function but you will need to manually apply any updates and security patches that are released.
Cloud Console provides dashboards to manage, troubleshoot, and monitor your GKE clusters, workloads, and applications.
|
https://cloud.google.com/kubernetes-engine/docs/concepts/dashboards?hl=th
|
CC-MAIN-2019-51
|
refinedweb
| 1,047
| 55.34
|
at the source code).
Find below the Messenger interface that the Groovy bean is going to be implementing.
The GroovyObjectCustomizer interface is a callback that allows you to hook additional creation logic into the process of creating a Groovy-backed bean. For example, implementations of this interface could invoke any required initialization method(s), set some default property values, or specify a custom MetaClass.
public interface GroovyObjectCustomizer { void customize(GroovyObject goo); }
The Spring Framework will instantiate an instance of your Groovy-backed bean, and will then pass the created GroovyObject to the specified GroovyObjectCustomizer if one has been defined. You can do whatever you like with the supplied GroovyObject reference: it is expected that the setting of a custom MetaClass is what most folks will want to do with this callback, and you can see an example of doing that below. Consult the relevant section of the Groovy reference manual, or do a search online: there are plenty of articles concerning this topic.
Actually making use of a GroovyObjectCustomizer is easy if you are using the Spring 2.0 namespace support. If you are not using the Spring 2.0 namespace support, you can still use the GroovyObjectCustomizer functionality.
This last section contains some bits and bobs related to the dynamic language support.
It is possible to use the Spring AOP framework to advise scripted beans. The Spring AOP framework actually is unaware that a bean that is being advised might be a scripted bean, so all of the AOP use cases and functionality that you may be using or aim to use will work with scripted beans. There is just one (small) thing that you need to be aware of when advising scripted beans... you cannot use class-based proxies, you must use interface-based proxies.
You are of course not just limited to advising scripted beans... you can also write aspects themselves in a supported dynamic language and use such beans to advise other Spring beans. This really would be an advanced use of the dynamic language support though.
In case it is not immediately obvious, scripted beans can of course be scoped just like any other bean. The scope attribute on the various <lang:language/> elements allows you to control the scope of the underlying scripted bean, just as it does with a regular bean. (The default scope is singleton, just as it is with 'regular' beans.)
|
http://docs.spring.io/spring/docs/3.0.5.RELEASE/reference/dynamic-language.html
|
CC-MAIN-2014-15
|
refinedweb
| 426
| 52.29
|
Access IP camera from Qt
I need to access a IP camera from my Qt Application. I'm using the src of the camera image () and updating this image with a Timer on QML.
Timer {
    interval: 200
    running: true
    repeat: true
    onTriggered: {
        image1.cache = false;
        image1.source = "";
    }
}
Executing only the code with the Camera, it works fine. The problem happens when I load the whole application: the Camera updates very slowly. I'm using embedded Linux and the resources are limited (512MB RAM, ARM Cortex-A5 500 MHz). Besides, I need to use Qt 4.8 for this (necessarily).
Any idea to improve the performance? Thanks
Hi and welcome to devnet,
What sizes are the images ? Can you configure your camera to send something more "friendly" e.g. RGB images ?
Thanks for your answer @SGaist. Well, by default my camera configuration is: Resolution: 320x240, FPS: 15, Jpeg Quality: Medium. With this configuration, the image update is very slow.
I tried configuring it as: Resolution: 160x120, FPS: 15, Jpeg Quality: Very slow. It works faster, but the quality is very bad.
I think that maybe there is another way to do camera streaming, without getting the image from the link () at an interval.. :(
What type of camera is it ?
"Very slow" doesn't tell much. But apparently you are asking for a frame every 200 ms, giving 5 fps, which obviously is very slow if you expect normal video display quality...
@mvuori Yes, I think strange too. But, I'm using the same program to do written on serial port. Each written on serial port is with interval around 300ms. The return of the serial port, is writing on QML too.
Are you doing any blocking function call ? e.g. for your serial port ?
Can you describe the various parts of your application so we can ensure that they are not influencing each other.
By the way, there's no need to set cache each time. Just set the property where you declare image1. That's one less thing consuming time uselessly.
@SGaist It's my main function:
int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QDeclarativeView *engine = new QmlApplicationViewer();
    Gps *gps = new Gps();                   // Class to read Gps (serial port 0). Interval: 10000ms
    SerialPort *serial = new SerialPort();  // Read Serial Port (serial port 1). Interval: 300ms
    if (serial->getMonitor()) {
        qDebug() << "Starting Monitor";
        Monitor *monitor = new Monitor(engine, serial);
        serial->setMonitor(monitor);
    }
    return app.exec();
}
The SerialPort class starts writing on the serial port in a forever loop. It's working well, without delays.
When the serial port reads the response, the response is sent to the Monitor class, which is responsible for formatting the digits and writing them on screen (the QML file). Therefore, every 300 ms my QML file is updated too (with the response from the serial port).
Then where's the camera in that setup ?
@SGaist The camera is loaded in a different QML, look:
import QtQuick 1.1

Rectangle {
    width: 300
    height: 400

    Image {
        id: camera1
        objectName: "camera1"
        x: 39
        y: 0
        width: 160
        height: 120
        source: ""
        cache: false

        Timer {
            interval: 300
            running: true
            repeat: true
            onTriggered: {
                camera1.source = "";
            }
        }
    }
    ...
}
The QML is loaded on Monitor class:
Monitor::Monitor(QDeclarativeView *engine, Monitor *m)
{
    engine->setSource(QUrl("qrc:/views/Monitor.qml"));
    window = qobject_cast<QObject *>(engine->rootObject());
    engine->showFullScreen();
    monitor = m;
    engine->rootContext()->setContextProperty("mainmonitor", this);
}
You can also try to load the next image when the previous one has loaded, by checking the status.
I use this in my application when loading jpeg from cameras.
You just need to set 2 Image objects and show them when Image is ready
@kolegs how can I check the status? Could you show an example? Are you using the QML Timer to update the frames? Thank you
You just need to create two Images at the same position, add the property
visible: status == Image.Ready
and also add something like this
onStatusChanged: {
    if (status == Image.Ready) {
        // here reload the other image
        image2.source = "";
        image2.source = cameraSource;
    }
}
Add the same thing in image2, but this time reload image1:
onStatusChanged: {
    if (status == Image.Ready) {
        // here reload the other image
        image1.source = "";
        image1.source = cameraSource;
    }
}
Also, at creation just set the source to cameraSource for one image, and for the other leave it empty.
Hope it helps.
@kolegs Thank you so much. I created this code and the performance is much better. I'm now reading two IP cameras with it.
Of course the delay still exists (the image is updated every 2 seconds). I think I need to improve my code. I'm using 98% of my CPU, but I need to run many routines :(
Thank you so much again
|
https://forum.qt.io/topic/67303/access-ip-camera-from-qt
|
CC-MAIN-2017-39
|
refinedweb
| 779
| 59.09
|
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Shall I record the user's name when he/she modify some fields?
for example:
.....
NO = fields.Char('Number')
Modifier=fields.Many2one('res.users',string='Creator')
.....
I want the effect:
when somebody modifies the field "NO", his/her name should be recorded in "Modifier";
I used "onchange" and "depends", but failed; could you give me some advice?
Hi,
You just have to check if NO field is modified or not and then store the current logged in user's ID in modifier.
EX:
@api.multi
def write(self, vals):
    if 'NO' in vals:
        vals.update({'Modifier': self._uid})
    return super(your_class_name, self).write(vals)
Share whatever code you have tried already.
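The shape of that override, with the Odoo machinery stubbed out so the logic is visible (plain Python for illustration, not a runnable Odoo model; `your_class_name` and the field names are placeholders from the answer):

```python
class Record:
    """Minimal stand-in for an Odoo model, just to show the write() pattern."""

    def __init__(self, uid):
        self._uid = uid   # Odoo exposes the current user's id as self._uid
        self.values = {}

    def write(self, vals):
        # If 'NO' is being modified, stamp the current user into Modifier
        # before delegating to the real write (super().write(vals) in Odoo).
        if 'NO' in vals:
            vals = dict(vals, Modifier=self._uid)
        self.values.update(vals)
        return True

rec = Record(uid=7)
rec.write({'NO': 'A-001'})
print(rec.values)  # {'NO': 'A-001', 'Modifier': 7}
```

The point is that the check happens inside write(), which every field modification passes through, so no onchange or depends trigger is needed.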
|
https://www.odoo.com/forum/help-1/question/shall-i-record-the-user-s-name-when-he-she-modify-some-fields-105024
|
CC-MAIN-2016-50
|
refinedweb
| 143
| 64.41
|
Last Saturday.
Tutorial Table of Contents:
- Part 1: Collecting data
- Part 2: Text Pre-processing
- Part 3: Term Frequencies
- Part 4: Rugby and Term Co-Occurrences (this article)
- Part 5: Data Visualisation Basics
- Part 6: Sentiment Analysis Basics
- Part 7: Geolocation and Interactive Maps
- Part 1: Collecting data
- Part 2: Text Pre-processing
- Part 3: Term Frequencies
- Part 4: Rugby and Term Co-Occurrences (this article)
- Part 5: Data Visualisation Basics
- Part 6: Sentiment Analysis Basics
- Part 7: Geolocation and Interactive Maps
24 thoughts on “Mining Twitter Data with Python (Part 4: Rugby and Term Co-occurrences)”
Happy New Year! Ripping blog – keep it up!
For the newer programmer following along, don’t forget to import defaultdict from collections:
from collections import defaultdict
Thanks Dave, I’ve updated the post, good catch
Cheers,
Marco
Hii Marco
How can I download tweets between a specific time, like you have downloaded between 12:15PM and 7:15PM GMT?
Also, what can I do to download tweets from a time which has already passed? For example, if I want to download yesterday's tweets?
Hello, Marcos, for some reason I’m getting this error:
Traceback (most recent call last):
File “C:\Users\Afonso\Documents\Workspaces\Workspace JAVA\Mineracao\Projeto.py”, line 207, in
tweet = json.loads(line)
File “C:\Users\Afonso\AppData\Local\Programs\Python\Python35\lib\json\__init__.py”, line 319, in loads
return _default_decoder.decode(s)
File “C:\Users\Afonso\AppData\Local\Programs\Python\Python35\lib\json\decoder.py”, line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File “C:\Users\Afonso\AppData\Local\Programs\Python\Python35\lib\json\decoder.py”, line 357, in raw_decode
raise JSONDecodeError(“Expecting value”, s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
My archive is working well on all the other codes, but in this part of the Article it complain about the json.line(), please help!
Hello Marco!
Can you explain what this line of code does?
t1_max_terms = max(com[t1].items(), key=operator.itemgetter(1))[:5]
Doesn’t max() return a single maximum value?? Why have you used [0:5] at the end?? Are you actully sorting the dict using itemgetter(1) ?? All in all, what exactly is t1_max_terms??
Thanks!
Hi, you are correct, max() returns a single value. In fact, that was some copy&paste mistake of mine from previous experiments, thanks for spotting it. The correct function to use is sorted(), so t1_max_terms is the list of (five) terms with the highest co-occurrence frequency for the term t1. I’ve updated the snippet accordingly
Best regards,
Marco
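In other words, assuming com is the nested defaultdict of co-occurrence counts discussed above (com[t1][t2] = number of tweets where t1 and t2 appear together — the toy data below is made up for illustration), the corrected line is just a sorted() over the inner dict:

```python
import operator
from collections import defaultdict

# Tiny stand-in for the co-occurrence matrix built in the article:
com = defaultdict(lambda: defaultdict(int))
for t2, n in [('ireland', 8), ('wales', 5), ('try', 3), ('win', 2),
              ('rugby', 9), ('match', 1)]:
    com['#6nations'][t2] = n

t1 = '#6nations'
# Sort the co-occurring terms by count, descending, and keep the top 5:
t1_max_terms = sorted(com[t1].items(),
                      key=operator.itemgetter(1),
                      reverse=True)[:5]
print(t1_max_terms)
# [('rugby', 9), ('ireland', 8), ('wales', 5), ('try', 3), ('win', 2)]
```

Unlike max(), which returns only the single largest pair, sorted()[:5] keeps the full ranking, which is what the snippet needs.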
this line
print("Co-occurrence for %s:" % search_word)
is giving me a syntax error(invalid syntax) using python 2.7 and i am unable to sort it out , any help is appreciated
Hi,
wordpress from time to time decides to do encode html entities without asking :)
I’ve fixed the line in the article, thanks for spotting it:
Side comment, the code is developed and tested for Python 3.4/3.5 so some incompatibilities with Python 2 might come up occasionally.
Cheers,
Marco
thank you so much, another question came up while analyzing these codes,
how can we remove emojis from the data?
the code you shared is taking care of emoticons and all but emoji is a problem , i am only doing hash search because of this shortcoming.
but i am wasting a lot of data power by neglecting the other terms.
LikeLiked by 1 person
Thanks so much for this great tutorial! I’m really really new to Python and text mining. What do you mean by “pass a term as a command line argument”? Where do I put the particular search term I want to find co-occurrence in that last code? Thanks again.
Hi Jessica
when you run your script, you can pass additional command line arguments which are accessible in the sys.argv array (the first element of the array, sys.argv[0], contains the name of the script, the second element sys.argv[1] contains the first command line argument, etc.). The command is:
python your_script.py term_to_search
so in that snippet the variable sys.argv[1] takes the value term_to_search
Cheers,
Marco
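A minimal sketch of that pattern (a hypothetical helper, not the article's full script):

```python
import sys

def search_term_from_argv(argv):
    # argv[0] is the script name; argv[1] is the first real argument.
    if len(argv) < 2:
        raise SystemExit("usage: python your_script.py term_to_search")
    return argv[1]

# Running "python your_script.py ireland" makes sys.argv equal to
# ['your_script.py', 'ireland'], so the helper would return 'ireland':
print(search_term_from_argv(['your_script.py', 'ireland']))
```

In the real script you would simply call search_term_from_argv(sys.argv).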
Hello Marco,
I’m having the problem that my results are showing as unicode strings and I don’t know how to fix it.
Example:
[((u'\ud83d', u'\ude02'), 5269), ((u'de', u'que'), 4210), ((u'de', u'la'), 4208), ((u'de', u'\xf3'), 3307), ((u'de', u'\xed'), 2825)]
Hi Hector,
the tokenizer described based on regular expressions described in the earlier articles is fairly simple and designed mainly for English. You can check out some tools from the NLTK like word_tokenize and TweetTokenizer, also have a look at this discussion about unicode encoding issues:
Cheers
Marco
Hi Marco,
very nice tutorial and thanks for sharing it! I have the same problem of Hector for french text and I didn’t figure out how to solve it.
@Hector did you find a solution?
Salvatore
I have also met this problem in processing Chinese and I don’t how to fix it.
Example:
[(u'\u5c0f\u660e', defaultdict(int, {u'\u6bd5\u4e1a': 1})),
(1, defaultdict(int, {})),
(u'\u6bd5\u4e1a', defaultdict(int, {u'\u4e8e': 1})),
(u'\u5728', defaultdict(int, {u'\u65e5\u672c\u4eac\u90fd\u5927\u5b66': 1})),
(u'\uff0c', defaultdict(int, {u'\u540e': 1})),
(u'\u4e8e', defaultdict(int, {u'\u4e2d\u56fd\u79d1\u5b66\u9662': 1})),
(u'\u65e5\u672c\u4eac\u90fd\u5927\u5b66', defaultdict(int, {u'\u6df1\u9020': 1})),
(u'\u4e2d\u56fd\u79d1\u5b66\u9662', defaultdict(int, {u'\u8ba1\u7b97\u6240': 1})),
(u'\u540e', defaultdict(int, {u'\u5728': 1})),
(u'\u8ba1\u7b97\u6240', defaultdict(int, {u'\uff0c': 1}))]
Hi libin,
for Chinese you’ll probably need a specialised library for tokenisation. For example check out (I haven’t worked with Chinese so I can’t give you more details atm)
Cheers,
Marco
Hi, Marco,
Thank you for your kind reply.
I have figure it out through “encode(‘utf-8’)” and I have got the co-occurrence bigram, I am now focus on how to build a visualized network.
Libin
Hey Marc, How to add suspension points symbol(…) to list of stop words. I have tried
this change
stop = stopwords.words('english') + punctuation + ['RT', 'via', '…']
but its not working. Thanks in advance.
Hi Marco,
I have been messing around with the above code to look for co-occurrence with three words.
(This gives a bit more insight… say, if you collected @RealDonaldTrump tweets, "regret voting for" or "make america great" can give insight into negative or positive connotations of those tweets.)
Can you give me any code snippets? or hints?
Thanks,
Sam
Hi Sam, did you have a look at trigrams (or bigrams, or n-grams)?
Cheers
Marco
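For anyone extending this to three-word phrases, extracting n-grams needs no extra dependency; here is a minimal pure-Python sketch (the helper name is invented):

```python
def ngrams(tokens, n=3):
    """Return the list of n-grams (as tuples) from a list of tokens."""
    return list(zip(*(tokens[i:] for i in range(n))))

tokens = ['make', 'america', 'great', 'again']
print(ngrams(tokens))
# → [('make', 'america', 'great'), ('america', 'great', 'again')]
```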
Hi Marco,
Thanks for your great work.
About Term co-occurrences, why don’t you use bigrams and collections from previous post?
Iterating for each word combination seems to waste lots of resources?
Best wishes,
Yang
Hi Yang, that’s also an option but it would limit the co-occurrences to words that you see next to each other rather than in the same tweet (context) as a whole, which is what we want to capture here.
Cheers
Marco
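The within-tweet definition can be sketched as follows (a hypothetical helper, counting every pair of terms in the same tweet rather than only adjacent ones):

```python
from collections import defaultdict
from itertools import combinations

def co_occurrences(tweets):
    """Count pairs of terms appearing in the same tweet, any distance apart."""
    com = defaultdict(lambda: defaultdict(int))
    for terms in tweets:
        # sorted() keeps (w1, w2) and (w2, w1) under a single key pair
        for w1, w2 in combinations(sorted(set(terms)), 2):
            com[w1][w2] += 1
    return com

com = co_occurrences([['rugby', 'england', 'win'], ['rugby', 'win']])
print(com['rugby']['win'])  # → 2
```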
Great tutorial. The code to calculate co-occurrences is too slow for large datasets; is there any way to speed up such calculations? Thanks!
Source: https://marcobonzanini.com/2015/03/23/mining-twitter-data-with-python-part-4-rugby-and-term-co-occurrences/
Creating custom UI alongside UI libraries can be daunting, but CSS doesn't need to be scary! Come learn how to help you and your dev team conquer UI fears.
Hello! My name is Alyssa. I’m the Angular Developer Advocate for Kendo UI at Progress. I love Angular and frontend development. I’m super passionate about creating and helping others to create ✨FABULOUS✨ user interfaces. I work from home and share an office with my husband who is also a frontend, CSS-loving geek like myself.
Recently, during a team meeting, I overheard one of my husband's co-workers state:
“We can’t create that feature, our UI library doesn’t have that component.”
Of course, my first thought was "this person is coo-coo bananas." 🍌 However, after talking with more and more developers, it seems they are not the only ones with this mindset. Whether the task of creating something custom is just too daunting, or they don't believe they have time for something they assume will take forever, this mentality of "it's not pre-built so I can't do it" is nothing new.
I've seen this mentality in many under-supported, overwhelmed or even burnt-out developers. Before we can address this pivotal issue, we need some background knowledge of CSS in order to truly understand the pain points. There seem to be two problems: one is being able to create the JavaScript and HTML that reflect the functionality of the feature, and the other is creating the CSS that is responsible for the overall look-and-feel / UX of the whole app.
After talking with many different developers, the second problem seems to be the crux of this mental hold-back. I want to start with some CSS-specific knowledge to level-set and show that CSS isn't as scary as we all think. This understanding will lay the groundwork for enabling yourself and other developers to truly create the impossible on the web.
It's all about that pesky and beautiful first letter in "CSS" that makes it unlike any other language developers work with. CASCADING! CSS is a way to list out styles for elements on the page.
button {
background: rebeccapurple;
}
What happens though, when there are competing rules to style the same elements differently?
button {
background: rebeccapurple;
color: white;
}
button {
background: deeppink;
}
Well, in the above case, the button would first have the background color of rebeccapurple applied, and then, as the browser moved down the stylesheet, it would apply the second rule and change the button's background to deeppink. This all happens before the user sees it, so the page would simply load with the deeppink button.
The cascading part only takes you so far with your CSS, though; the other (and main) gotcha is ✨SPECIFICITY!✨ In our previous example, the second style rule for the button background overrode the first. However, by being more specific in our first selector, we can have rebeccapurple load in as our background color, without needing to move the order of our CSS around at all.
button.submit {
background: rebeccapurple;
color: white;
}
button {
background: deeppink;
}
See the Pen "CSS Cascading & Specificity" by Alyssa Nicoll (@alyssamichelle) on CodePen.
By adding the class of submit to the first button selector, it will override the later deeppink rule. CSS selectors start off worth nothing or not very much (*, h1, button, button:before) and work their way up to almost impossible to override (inline styles, !important). Una Kravets did a short and wonderful podcast episode to explain CSS specificity. She goes into much more detail than I do and even lists out examples to walk you through it. It's really worth a listen if you don't have this concept fully down yet. I also created this handy-dandy chart to help us keep things straight:
I also found these definitions to be super helpful:
Pseudo Element vs Class?
Pseudo Element: represents something physical :before
Pseudo Class: represents something stateful :active
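To make the ranking concrete, here is a deliberately simplified specificity calculator (a Python toy for illustration, not the real CSS algorithm; it ignores attribute selectors, pseudo-elements and other edge cases):

```python
import re

def specificity(selector):
    """Toy score: (#id selectors, #class/pseudo-class selectors, #type selectors)."""
    ids = len(re.findall(r'#[\w-]+', selector))
    classes = len(re.findall(r'\.[\w-]+|:[\w-]+', selector))
    compounds = re.split(r'[\s>+~]+', selector.strip())
    elements = sum(1 for c in compounds if c and c[0].isalpha())
    return (ids, classes, elements)

# Tuples compare left to right, like the columns of a specificity chart:
print(specificity('button.submit'))  # → (0, 1, 1)
print(specificity('button'))         # → (0, 0, 1)
print(specificity('button.submit') > specificity('button'))  # → True
```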
Additional things to note about CSS specificity:
This resource is a wonderful guide to CSS specificity. These are good tools to bookmark and share with developers who might also know the CSS struggle is real. (Check out my curated list below for all the 💪 powerful resources gathered in one location!)
In my previous job I was tasked with creating custom UI often. The project I was working on was built with Bootstrap. So there were many times in my styles that I had to do things like this:
I had to duplicate code in different places in order to override Bootstrap's styles. This is because of something I like to call specificity wars.
When styles (whether your own or from a library/framework you have included) are either too generic or too specific, they become very difficult to work with and create custom UI alongside.
On the project I was previously working on, the one I mentioned before, I was tasked with building out a pop-up panel for clients to customize a design that would be printed on fabric. I started creating and styling my panel and, when I thought I had finished, I loaded up the browser and saw this:
Everything looked as I expected so far, except this large gappy-gap happening on the bottom:
This, of course, was happening because Bootstrap had already styled anything with a class of .panel to have a margin-bottom of 20px (along with other styles).
The real issue here is that Bootstrap added their styles to the very generic class of .panel. A better and more friendly way, would be to put their styles on a namespaced class like .bootstrap-panel. This might seem nit-picky, except that .panel is not the only generic class or namespace that Bootstrap claims, rather, there are many, such as .label and .legend that they claim as well:
This forces you to be increasingly specific and to isolate your styles away from the application's. I don't believe we should drift away from the cascade that CSS employs; rather, I think we should use it wisely by only ever being as specific as we need to be.
The other side of the specificity wars is having styles that are too specific.
Remember how we talked about specificity above? The !important flag is the most specific and most harmful thing you can put into your styles. It is very difficult and painful to override something with the !important flag. The last time I checked (3/25/2020), Bootstrap had over 1,000 !important flags in its CSS.
Put simply, this means that if you are using something like Bootstrap that is being very specific with their styles, it is going to be very difficult to create custom styles alongside theirs. By no means impossible, just painful.
It doesn't always need to be this painful though; having a UI library does not mean it must be as specific as possible. Kendo UI, for example, is a HUGE component library with over 70 components. When searching through the repos for Kendo UI, component by component, the closest things to !important that I could find were comments about important things. ;) It has been so much easier to create custom UI while working with Kendo UI for Angular than in my experiences with Bootstrap. I know Kendo UI does have a few !important flags, but the team tries to be very conscious of when and where they are used. You can have a well-structured and robust UI library without becoming too specific and making customizations impossible.
This doesn't leave us with much wiggle room, I realize. On the one hand, when you are creating styles of your own, you don't want to be too generic, which could leave you with clashing styles and unexpected results. You do, of course, want to be generic enough to create a foundation that all your other styles build upon. However, being super generic all over the place will only lead to pain.
On the other hand, you don't want to be so specific that you require !important flags all over the place to get anything to apply. As with many things in life, balance is required here.
Oftentimes, when looking into why my fellow developers are fearful of a CSS assignment, it is because they are unaware of the power and truly limitless nature of CSS. I believe that simply being aware of what is possible with CSS, and keeping up to date with what people are creating with it, can open pathways in your brain that were previously shut.
A great example is something that people often reach to UI libraries for, the very scary GRID!! 👻 Which, in all honesty, is not that scary at all. Here is a codepen where I have recreated a Grid layout with about 35 lines of CSS.
See the Pen "CSS Grid Layout" by Alyssa Nicoll (@alyssamichelle) on CodePen.
The bulk of tackling something scary like this is understanding how to create a Grid layout either using Grid or Flexbox. Check out this Codepen to see me use Flexbox and CSS Grid side-by-side for comparison. A wonderful resource that I prefer for learning and reminding myself about CSS Grid is learncssgrid.com.
So instead of pulling in a library with thousands upon thousands of lines of CSS, if all you need is a Grid, it might be best to build your own:
.row {
  display: grid;          /* 💥 */
  grid-auto-flow: column; /* 💥 */
}
Another step your team can take to help with sustainability and longevity of your styles is to create a design system to guide any new UI. At FOWA 2013 in London, Mark Otto described a design system as:
Design System: everything that makes up your product
Design systems are meant to include EVERYTHING about your brand. This includes things like typography and layout all the way to things like coding conventions and full blown style guides. Design systems are a place to keep all of this information so the team can reference it, learn from it and keep the brand growing and thriving.
A really neat design system to check out is Salesforce's.
Kendo UI for Angular actually integrates really well with design systems and, in fact, implements its own with Themes!
The Default theme is technically our own Design System, Bootstrap is one we integrate with, and Material is of course tied to Material Design. I think having a design system in place can really help you elevate your components and your UI to a professional level.
Creating your own design system, however, can be a bit daunting. This is a large task that, while incredibly useful, will take an incredible amount of energy to get right. So why not start small and create a very helpful and relatively easy-to-get-started-with piece: a style guide.
Here is an example Style Guide my husband created for an internal application. This was on their internal site where other developers could go and get a quick reference for creating things like buttons and forms, to make sure any newly created UI remained consistent.
Lastly, I recommend creating a curated list of CSS and Design resources for your team. This will not only include your own style guide or design system, but also have courses, blog posts, and codepens that your developers can reference in times of great CSS peril. Here is my own list to get you started: Alyssa’s Powerful 💪 CSS Resource List
To sum up, it can be very difficult and daunting to create custom UI. There are many issues developers face, from over-specific libraries that are already included in the project, to complicated and unknown things in CSS itself. The best advice I can give to any manager of a dev team or individual developer is this:
By all means, do not reinvent the wheel; lean on the work of others. But we should never let the difficult task of creating custom UI alongside existing styles or libraries limit what we create for our clients. Create a system, a style guide, and a list of resources that your team can go to. Always encourage those around you to reach for the truly creative, keeping your users' needs first, above our own fear and inconvenience.
Alyssa is the Angular Developer Advocate for Kendo UI. If you're into Angular, React, Vue or jQuery and also happen to love beautiful and highly detailed components, check out Kendo UI—and you can find the Kendo UI for Angular library here or just jump into a free 30 day trial today. Happy Coding!
Source: https://www.telerik.com/blogs/empowering-developers-to-create-custom-ui
Hi.
I’m using multiprocessing to train a model. I have two processes taking batches from queues (I’m actually using mp.Arrays that are more efficient, but a little trickier) and sharing my model weights.
Every ten batches seen, they make a validation iteration. In a trainBatch() method and in a valBatch() method, I placed calls to model.train() or model.eval(). They go something like this:

def trainBatch(model, input, criterion, target, optimizer):
    optimizer.zero_grad()
    model.train()
    output = model(input)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    return loss

def valBatch(model, input, criterion, target):
    model.eval()
    with torch.no_grad():
        output = model(input)
        loss = criterion(output, target)
    return loss
I've observed that without those calls, and therefore leaving every iteration in train(True) mode (be it training or validation), the metrics look "rational". But when I enable those calls, they become really weird, as can be seen here: (out_var and out_var_val are the variance of the logits, which I like to plot for diagnosis)
(orange : the curves while calling model.eval() and model.train(), and in blue without those calls)
I wonder if the problem doesn't stem from the fact that, in such a framework, there can be a conflict on the layers between two opposite calls: almost always, model.eval() will be called while the model is doing a training iteration.
I wonder if this doesn't screw up the training of the BatchNorm layers entirely, since the metrics in validation seem to be so poor. Or maybe calling model.eval() while training is happening deactivates the BatchNorm entirely without altering the learning?
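The suspected conflict can be illustrated with a toy stand-in for the model (plain Python, no PyTorch; just the shared mutable flag that train() and eval() toggle):

```python
class ToyModel:
    """Stand-in for a shared module: train()/eval() flip one shared flag."""
    def __init__(self):
        self.training = True
    def train(self):
        self.training = True
    def eval(self):
        self.training = False

m = ToyModel()     # one object shared between the two worker processes
m.train()          # training worker enters trainBatch()
m.eval()           # validation worker calls valBatch() concurrently
print(m.training)  # → False: the in-flight training forward pass now runs in eval mode
```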
What do you think is happening? Is validation during training prohibited in multiprocessing?
Source: https://discuss.pytorch.org/t/conflict-between-model-eval-and-train-with-multiprocess-training-and-evaluation/21380/3
Things You'll Need:
- Visual Studio 2003 for asp.Net 1.1
- Visual Studio 2005 for asp.Net 2.0
- Enom Web hosting account
- FTP Client
- Step 1
Download the source code for the MySQL Connector/Net from
Note that version 1.0 refers to the connector, not to the .NET version.
- Step 2
Extract the contents of the zip file to a local directory.
- Step 3
Open mysql.csproj project file in Visual Studio.
- Step 4
Open the AssemblyInfo.cs file, and add the following code, in the using block, at the top of the file (if it is not already there):
using System.Security;
- Step 5
Add the following code to the assembly section of the file:
[assembly: AllowPartiallyTrustedCallers]
- Step 6
Recompile the dll.
- Step 7
You may now reference this dll from other projects. When you decide to publish your project to your hosting server, you need to upload this modified version of the dll to your bin directory.
- Step 8
If this is too much, you can download precompiled DLLs directly from eNom's control panel.
MySQL .NET Connector v1.07 for .NET 2.0
MySQL .NET Connector v1.07 for .NET 1.1
Source: http://www.ehow.com/how_2003953_recompile-mysql-library-run-under-medium-trust-environment-enom-web-hosting.html
I've written a data converter in C++. The output today is fixed length records in ASCII TEXT format. I want to add support for XML output.
I've created XML documents in the past, but I've always taken a hacker's approach. By this I mean I simply keep track of the nesting level of the current set of tags I'm working with, and create subroutines where I pass a tag or data value and it gets wrapped in the appropriate delimiters ("<>" and "</>"), indented for ease of human reading, and output. So, essentially, I've separated the data from the formatting, and only apply the formatting at output time.
However, I suspect there might be libraries of functions that exist for outputting in XML format where I can define, perhaps, the namespace and then pass (essentially) raw data to it and it will do all the output formatting for me. Do such libraries exist?
Thanks, Todd
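Such libraries do exist; for C++, libxml2 and TinyXML-2 are well-known examples. To illustrate the pattern of handing raw data to a library that takes care of escaping and indentation, here is the same idea using Python's standard library (the element names are made up):

```python
import xml.etree.ElementTree as ET

# Build the tree from raw data; the library handles delimiters and escaping.
root = ET.Element("records")
rec = ET.SubElement(root, "record", id="1")
ET.SubElement(rec, "name").text = "Todd & co"  # "&" is escaped automatically

ET.indent(root)  # human-readable indentation (Python 3.9+)
print(ET.tostring(root, encoding="unicode"))
```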
Source: http://cboard.cprogramming.com/tech-board/98539-creating-xml-file.html
In a quest for groovy graphics, we’ve integrated D3 support in the blog. Stay tuned for more to come…
This graphic was created by Mike Bostock.
We're using the WP-D3 WordPress plugin
This project explores human-controlled robotics. We manipulate a robotic arm in real-time using five-finger gestures, with a force-sensitive touch interface.
This project was created for two reasons: First, is to evaluate Sensel’s new force-sensitive touch interface. Second, is a general exploration of a magical sensation; The use of touch gestures to animate a robot.
There are four hand gestures that control the robot arm’s movements:
Force down:
Rotation:
Pinch:
Forward / Backward:
To make this project happen, the following items were used:
You can get this project up and running in no time! The code is located here.
First, program the Arduino with the “sketch” code. Next, run the Processing application, and control the robot with five fingers!
Sensel is designing next-generation input devices. I’ve had the pleasure of working with their pressure-sensitive, multi-touch interface in this project. The video below is a promo teaser for their upcoming crowd-funding campaign launch.
Download the board design files (“Gerber files”) and use them to order PCBs from the amazing OSH Park.
Solder up some boards, inhale fumes, and get a buzz..
If you don’t already own a soldering iron, may I suggest the most excellent Hakko FX-901
Use Arduino with the Teensyduino extension to program this code onto your Teensy 2.0
#include <DmxSimple.h>

void setup() {
  Serial.begin(115200);
  Serial.println("SerialToDmx ready");
  Serial.println();
  Serial.println("Syntax:");
  Serial.println(" 123c : use DMX channel 123");
  Serial.println(" 45w : set current channel to value 45");
  DmxSimple.usePin(0);
}

int value = 0;
int channel;

void loop() {
  int c;
  while (!Serial.available());
  c = Serial.read();
  if ((c >= '0') && (c <= '9')) {
    value = 10 * value + c - '0';
  } else {
    if (c == 'c') channel = value;
    else if (c == 'w') {
      DmxSimple.write(channel, value);
      Serial.print("channel:");
      Serial.println(channel);
      Serial.print(" value:");
      Serial.println(value);
    }
    value = 0;
  }
}
The adapter receives commands in the following format:
ex: 123c 45w
This example sets channel 123 to value 45.
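For illustration only (this helper is not part of the project), the command format can be captured in a few lines of Python:

```python
def dmx_command(channel, value):
    """Format a command for the serial-to-DMX adapter: '<channel>c <value>w'."""
    return "{}c {}w".format(channel, value)

print(dmx_command(123, 45))  # → 123c 45w
```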
Plug the Raspberry Pi into your existing network through a wired connection. Use Adafruit’s Pi Finder to locate and access the terminal of your Pi.
sudo apt-get install hostapd udhcpd
The version of hostapd that we just installed does not support the Edimax dongle’s hardware. We need to replace some files, to make it compatible.
wget
unzip hostapd.zip
sudo mv /usr/sbin/hostapd /usr/sbin/hostapd.bak
sudo mv hostapd /usr/sbin/hostapd.edimax
sudo ln -sf /usr/sbin/hostapd.edimax /usr/sbin/hostapd
sudo chown root.root /usr/sbin/hostapd
sudo chmod 755 /usr/sbin/hostapd
sudo nano /etc/hostapd/hostapd.conf
Edit the file so that it contains the following text:
interface=wlan0
driver=rtl871xdrv
ssid=Raspi_wifi
hw_mode=g
channel=11
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=somepassword
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
Hit Control-X to exit. You will be prompted to save changes.
sudo nano /etc/udhcpd.conf
Edit the file so that it contains the following text:
start 192.168.0.2 # This is the range of IPs that the hotspot will give to client devices.
end 192.168.0.20
interface wlan0 # The device udhcp listens on.
remaining yes
opt dns 8.8.8.8 4.2.2.2 # The DNS servers client devices will use.
opt subnet 255.255.255.0
opt router 192.168.0.1 # The Pi’s IP address on wlan0 which we will set up shortly.
opt lease 864000 # 10 day DHCP lease time in seconds
Hit Control-X to exit. You will be prompted to save changes.
sudo nano /etc/default/udhcpd
Find the following line:
DHCPD_ENABLED="no"
And prepend it with a pound symbol, like this:
#DHCPD_ENABLED="no"
Hit Control-X to exit. You will be prompted to save changes.
Source: http://raykampmeier.com/
Visual Studio templates are divided into two major categories: project templates and item templates. This topic explains some of the differences between project and item templates.
Item templates are individual items that a user can add to a project by using the Add New Item dialog box. Examples of item templates included with Visual Studio are:
Windows Form
Code File
XML Schema
Project templates are entire projects from which a user can create a new project by using the New Project dialog box. A project template includes all the files necessary to begin a specific type of project. Examples of project templates included with Visual Studio are:
Windows Application
Class Library
Empty Project
You can use the wizard that creates projects from templates to perform custom actions. For more information, see How to: Use Wizards with Project Templates.
Both item and project templates are stored as .zip files. The contents of the .zip files differ between the different types of templates.
Item template .zip files contain:
The .vstemplate file that contains the metadata for the template.
One or more files to add to a project when an item is instantiated from the template.
Although item templates might only specify one item, sometimes the item contains multiple files. For example, Windows Forms item templates can contain a code file, a designer file, and a resource file. For more information, see How to: Create Multi-file Item Templates.
An optional icon file to appear in the Add New Item dialog box.
Project template .zip files contain:
The project file or Web.config file.
The code files, such as Windows Forms, Web Forms, class files, and resource files.
An optional icon file to appear in the New Project dialog box.
Project and item templates are created and used in different ways. The following table explains how to complete common tasks with project and item templates.
Specifying the template type in the .vstemplate file:
- Project templates: set the Type attribute of the VSTemplate element to Project.
- Item templates: set the Type attribute of the VSTemplate element to Item.
Instantiating the template:
- Project templates: select the template from the New Project dialog box.
- Item templates: with a project open, select the template from the Add New Item dialog box.
Adding assembly references:
- Project templates: add references to the project before creating the template.
- Item templates: add references with the References element in the .vstemplate file.
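As a sketch of the manifest itself, a minimal item-template .vstemplate might look like this (names, paths and values are illustrative, not taken from this page):

```xml
<VSTemplate Version="2.0.0" Type="Item"
            xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <Name>My Code File</Name>
    <Description>An empty code file.</Description>
    <Icon>MyIcon.ico</Icon>
  </TemplateData>
  <TemplateContent>
    <ProjectItem>MyItem.cs</ProjectItem>
  </TemplateContent>
</VSTemplate>
```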
When an item is added to a project, for some languages (such as C#) the $rootnamespace$ parameter can change depending on where the item is added. For example, if an item is added as a child of the project node, then the namespace of the item will be the default namespace set within the project properties. However, if an item is added to a folder within that node, the name of the folder will be added to the project's default namespace.
For example, suppose an item is added to the MyFolder2 node in the following project, and the project's default namespace is MyNamespace:
MyProject
  MyFolder1
    MyFolder2
The namespace of the added item will be MyNamespace.MyFolder1.MyFolder2.
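This folder-to-namespace rule can be expressed as a small helper (Python for illustration; the function name is invented):

```python
def item_namespace(default_namespace, folder_path):
    """Join the project's default namespace with the folders containing the item."""
    folders = [part for part in folder_path.split("/") if part]
    return ".".join([default_namespace] + folders)

print(item_namespace("MyNamespace", "MyFolder1/MyFolder2"))
# → MyNamespace.MyFolder1.MyFolder2
```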
Source: http://msdn.microsoft.com/en-us/library/ms247072(VS.80).aspx
Testing network software with pytest and Linux namespaces
Vincent Bernat
While a rewrite (complete or iterative) would help to make the code more test-friendly, it would be quite an effort and would likely introduce operational bugs along the way.
To get better test coverage, the major features of lldpd are now verified through integration tests. Those tests leverage Linux network namespaces to set up a lightweight and isolated environment for each test. They run through pytest, a powerful testing tool.
pytest in a nutshell
pytest is a Python testing tool whose primary use is to write tests for Python applications but is versatile enough for other creative usages. It is bundled with three killer features:
- you can directly use the assert keyword,
- you can inject fixtures in any test function, and
- you can parametrize tests.
Assertions
With unittest, the unit testing framework included with Python, and many similar frameworks, unit tests have to be encapsulated into a class and use the provided assertion methods. For example:
class testArithmetics(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 3, 4)
The equivalent with pytest is simpler and more readable:
def test_addition():
    assert 1 + 3 == 4
pytest will analyze the AST and display useful error messages in case of failure. For further information, see Benjamin Peterson’s article.
Fixtures
A fixture is the set of actions performed in order to prepare the system to run some tests. With classic frameworks, you can only define one fixture for a set of tests:
class testInVM(unittest.TestCase):

    def setUp(self):
        self.vm = VM('Test-VM')
        self.vm.start()
        self.ssh = SSHClient()
        self.ssh.connect(self.vm.public_ip)

    def tearDown(self):
        self.ssh.close()
        self.vm.destroy()

    def test_hello(self):
        stdin, stdout, stderr = self.ssh.exec_command("echo hello")
        stdin.close()
        self.assertEqual(stderr.read(), b"")
        self.assertEqual(stdout.read(), b"hello\n")
In the example above, we want to test various commands on a remote VM. The fixture launches a new VM and configures an SSH connection. However, if the SSH connection cannot be established, the fixture will fail and the tearDown() method won't be invoked. The VM will be left running.
Instead, with pytest, we could do this:
@pytest.yield_fixture
def vm():
    r = VM('Test-VM')
    r.start()
    yield r
    r.destroy()

@pytest.yield_fixture
def ssh(vm):
    ssh = SSHClient()
    ssh.connect(vm.public_ip)
    yield ssh
    ssh.close()

def test_hello(ssh):
    stdin, stdout, stderr = ssh.exec_command("echo hello")
    stdin.close()
    assert stderr.read() == b""
    assert stdout.read() == b"hello\n"
The first fixture will provide a freshly booted VM. The second one will setup an SSH connection to the VM provided as an argument. Fixtures are used through dependency injection: just give their names in the signature of the test functions and fixtures that need them. Each fixture only handle the lifetime of one entity. Whatever a dependent test function or fixture succeeds or fails, the VM will always be finally destroyed.
Parameters
If you want to run the same test several times with a varying parameter, you can dynamically create test functions or use one test function with a loop. With pytest, you can parametrize test functions and fixtures:
@pytest.mark.parametrize("n1, n2, expected", [
    (1, 3, 4),
    (8, 20, 28),
    (-4, 0, -4)])
def test_addition(n1, n2, expected):
    assert n1 + n2 == expected
Testing lldpd
The general plan to test a feature in lldpd is the following:
- Setup two namespaces.
- Create a virtual link between them.
- Spawn a lldpd process in each namespace.
- Test the feature in one namespace.
- Check with lldpcli we get the expected result in the other.
Here is a typical test using the most interesting features of pytest:
@pytest.mark.skipif('LLDP-MED' not in pytest.config.lldpd.features,
                    reason="LLDP-MED not supported")
@pytest.mark.parametrize("classe, expected", [
    (1, "Generic Endpoint (Class I)"),
    (2, "Media Endpoint (Class II)"),
    (3, "Communication Device Endpoint (Class III)"),
    (4, "Network Connectivity Device")])
def test_med_devicetype(lldpd, lldpcli, namespaces, links,
                        classe, expected):
    links(namespaces(1), namespaces(2))
    with namespaces(1):
        lldpd("-r")
    with namespaces(2):
        lldpd("-M", str(classe))
    with namespaces(1):
        out = lldpcli("-f", "keyvalue", "show", "neighbors", "details")
    assert out['lldp.eth0.lldp-med.device-type'] == expected
First, the test will be executed only if lldpd was compiled with LLDP-MED support. Second, the test is parametrized. We will execute four distinct tests, one for each role that lldpd should be able to take as an LLDP-MED-enabled endpoint.
The signature of the test has four parameters that are not covered by the parametrize() decorator: lldpd, lldpcli, namespaces and links. They are fixtures. A lot of magic happens in those to keep the actual tests short:
- lldpd is a factory to spawn an instance of lldpd. When called, it will set up the current namespace (setting up the chroot, creating the user and group for privilege separation, replacing some files to be distribution-agnostic, …), then call lldpd with the additional parameters provided. The output is recorded and added to the test report in case of failure. The module also contains the creation of the pytest.config.lldpd object that is used to record the features supported by lldpd and skip non-matching tests. You can read fixtures/programs.py for more details.
- lldpcli is also a factory, but it spawns instances of lldpcli, the client to query lldpd. Moreover, it will parse the output into a dictionary to reduce boilerplate.
- namespaces is one of the most interesting pieces. It is a factory for Linux namespaces. It will spawn a new namespace or refer to an existing one. It is possible to switch from one namespace to another (with with) as they are contexts. Behind the scenes, the factory maintains the appropriate file descriptors for each namespace and switches to them with setns(). Once the test is done, everything is wiped out as the file descriptors are garbage collected. You can read fixtures/namespaces.py for more details. It is quite reusable in other projects².
- links contains helpers to handle network interfaces: creation of virtual Ethernet links between namespaces, creation of bridges, bonds and VLANs, etc. It relies on the pyroute2 module. You can read fixtures/network.py for more details.
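The switching pattern used by the namespaces fixture, a factory whose return value doubles as a context manager, can be sketched in plain Python (no real setns() calls here; the class and names are invented for illustration):

```python
class NamespaceFactory:
    """Structural sketch of the `namespaces` fixture: calling the factory with
    the same index reuses the same handle (a kept-open file descriptor in the
    real fixture), and the handle works as a context manager (which would
    call setns() on enter)."""

    def __init__(self):
        self.handles = {}
        self.current = None

    def __call__(self, index):
        self.handles.setdefault(index, "ns-{}".format(index))
        factory, handle = self, self.handles[index]

        class _Enter:
            def __enter__(self):
                self.previous = factory.current
                factory.current = handle         # real code: setns(fd)
                return handle
            def __exit__(self, *exc):
                factory.current = self.previous  # switch back, even on error
        return _Enter()

ns = NamespaceFactory()
with ns(1):
    print(ns.current)  # → ns-1
print(ns.current)      # → None
```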
You can see an example of a test run on the Travis build for 0.9.2. Since each test is correctly isolated, it's possible to run parallel tests with pytest -n 10 --boxed. To catch even more bugs, both the address sanitizer (ASAN) and the undefined behavior sanitizer (UBSAN) are enabled. In case of a problem, notably a memory leak, the faulty program will exit with a non-zero exit code and the associated test will fail.
A project like cwrap would definitely help. However, it lacks support for Netlink and raw sockets that are essential in lldpd operations. ↩
There are three main limitations in the use of namespaces with this fixture. First, when creating a user namespace, only root is mapped to the current user. With lldpd, we have two users (root and _lldpd). Therefore, the tests have to run as root. The second limitation is with the PID namespace. It's not possible for a process to switch from one PID namespace to another. When you call setns() on a PID namespace, only children of the current process will be in the new PID namespace. The PID namespace is convenient to ensure everyone gets killed once the tests are terminated, but you must keep in mind that /proc must be mounted in children only. The third limitation is that, for some namespaces (PID and user), all threads of a process must be part of the same namespace. Therefore, don't use threads in tests. Use the multiprocessing module instead. ↩
No oled library for python3?
- ExplosiveSoda last edited by
I have found that you can utilize the OLED with Python 2.7 but not with 3. Has anyone figured out how to get it to work? I also found this page in search of a library for 3, as well as their current documentation on it.
@Tyler-Mattioli looks like you'd have to convert Python 2 OLED code (the whole framework, really) to python 3, compile it from that new source. Otherwise, you're stuck doing the things listed here, though they may not work on the embedded systems due to the libraries needed probably not being supported.
- Lazar Demin administrators last edited by
@Tyler-Mattioli @Theodore-Borromeo all of the OLED, PWM, and Relay source code can be found here:
The Python2.7 implementation can be found at:
We haven't had the time to port everything for Python3. If you have the time and initiative to implement libraries please submit a pull request!
@Lazar-Demin Looking at the code, I don't know if there'd be any changes to the c code underlying the python layers, but I'd like to be sure you all have a docker container that allows for easy experimentation, compile and testing. Is there a best practice for compiling what you do have now? I'd want to be sure I can even build the current stuff before I'd ever agree to attempt a Python 3 translation. ( I don't even use python 3. . ., but I guess there's always a first time)
- Lazar Demin administrators last edited by
@Theodore-Borromeo We don't have such a Docker container at the moment. We will evaluate this and then get back to all you guys with a roadmap when we're largely done with the crowdfunding campaigns!
For the short-term, you can use the i2c-exp-driver package Makefile from our package repo:
in a regular Lede buildroot. It should compile without any issues.
@Lazar-Demin If that's what you have, then I already have an omega2 plus lede buildroot within a docker container. . .
Looking for where or how I'm supposed to get the file at:
#include <Python.h>
I assume this is a file provided for creating Python interfaces in C, no? If so, what packages in LEDE were used to satisfy the programming dependencies for python2? I would need to replicate that for python3, and then install the necessary python3 dev packages too, right?
I'm looking in:
and don't see the necessary includes I would need to do a compile switching to python3. . .
I am also presuming that:
from OmegaExpansion import pwmExp
is built from:
I can't seem to find how or where you all built that python library. Is there something I'm not seeing?
Ugh. Python.h is the C API that should be part of the dev packages for python2. It won't be in the source tree (at least it shouldn't!) for i2c-exp-driver.
I also figured out that the openwrt packages repo dictates that the OmegaExpansion module in
from OmegaExpansion import pwmExp
is just a directory in the python install path. . . It now makes sense as to why I couldn't figure out where in the makefile you were building that module (turns out you weren't, it's just the directory!)
With all that said, it seems that a good first tack would be to take the Python version definition in the makefile in the packages tree, and propagate that down into the Python library linkages while building the Python C wrapper libs. Right now, we are statically linking to python2 in the pwmExp libs, which will definitely NOT work with python3, even if Python.h defines an API which should still be usable by Python 3 or Python 2 without many issues. . .
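The "propagate the version down" idea could look something like this. Variable names and include paths here are illustrative guesses, not the actual i2c-exp-driver Makefile:

```make
# Hypothetical sketch: parameterize the Python version instead of hard-coding 2.7.
PYTHON_VERSION ?= 2.7

CFLAGS  += -I$(STAGING_DIR)/usr/include/python$(PYTHON_VERSION)
LDFLAGS += -lpython$(PYTHON_VERSION)

# Build the python3 variant with: make PYTHON_VERSION=3.6
```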
- Michael Klug last edited by
I would really appreciate a python3 version of the library as I'm trying to build a home automation solution around python3 at the moment.
PIL Image and Closing DataStreams/ImageFile
good day Pythonistas!
I was having an issue with Image | pillow | PIL (your choice 😅) with file contexts for Image.open(), Image.load() and Image.close().
I was receiving 19 warnings for: ↴
__warningregistry__[("unclosed file <_io.BufferedReader", <class 'ResourceWarning'>, 8)]
- This Warning was located at the end (bottom) of the Console Inspector
Example of my use at the time ↴
from PIL import Image

img = Image.open('my_image.png')
img.load()  # This will have been my first issue besides bad practice
img = img.resize((256, 256), 1)  # The 1 in pos2 is for AA, original was 3600x3600
img.save('resized.png', 'png')  # pos2 not needed for str type filename but good practice
Example Corrected
⒈ General Implementation ↴
from PIL import Image

with Image.open('my_image.png') as img:
    img = img.resize((256, 256), 1)
    img.save('resized.png', 'png')
⒉ Dated (I believe) Implementation ↴
from PIL import Image

try:
    img = Image.open('my_image.png')
    img = img.resize((256, 256), 1)
    img.save('resized.png', 'png')
finally:
    img.close()
⒊ Alternative (this is what I went with) Implementation ↴
from PIL import Image

try:
    with open('my_image.png', 'rb') as f:
        img = Image.open(f)
        img = img.resize((256, 256), 1)
        img.save('resized.png', 'png')
finally:
    f.close()
    img.close()
This is how I understand what's going on..

Image.open(fn) opens the image BUT does not load any of the data.
Image.load() loads the pixel data to the stream.
Calling an operation on the Image object the first time ALSO loads pixel data to the stream. In this case img.resize((w, h), filter).
img.save(fn, format) closes the image.

The problem was that both img.load() and img.resize((w, h), filter) each loaded their own separate data stream.. I assume the auto-implemented stream was the primary.
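To make the fix concrete, here is a small self-contained check. It builds its own throwaway image (assumes Pillow is installed; filenames are illustrative):

```python
from PIL import Image

# Create a throwaway source image so the snippet is self-contained.
Image.new('RGB', (3600, 3600), 'white').save('my_image.png')

with Image.open('my_image.png') as img:   # no explicit load() needed
    resized = img.resize((256, 256), 1)   # first operation loads the pixels
resized.save('resized.png', 'PNG')        # safe: resized owns its own data

print(resized.size)
# (256, 256)
```

Because resize() returns a new image with its own pixel data, saving it after the with-block is safe even though the source file is already closed.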
### Finally
If you're running your PIL Image through operations... one shouldn't call load() explicitly if you're working with single-paged images.
This worked wonderfully... till I found that ☝︎ still remained...
my_image.tiff...
Turns out that "tiff" format will always be treated as multipage..
" 𝄆𝄞♫♪⁼𝄫 ... should have read the brochure 🧐..."
I tried many approaches to this part.. And after hours of wonderful warning messages, decided I didn't NEED the tiff file so I changed format to png and the problem "disappeared"..
According to everything I gathered, both of my issues are very common while using Pillow across the whole Python community. So I came here in hopes that my first segment will help someone, and even though I removed my problem I would like to try to understand what is happening with tiff inside PIL to produce such an issue..
Thank You!
Here is my current snippet for anyone that wanted or needed ↴
class Loop(scene.Scene, metaclass=GCore):
    def __init__(self, *args, **kwargs):
        scene.Scene.__init__(self, *args, **kwargs)
        super().setup()
        GCore.PopulateCache(self)

    def setup(self):
        try:
            # The cache starts with all the values being str paths pulled from
            # a .txt file; here we change them to Texture objects to use in the
            # next initialization stage.
            for k, v in self.cache.items():
                with open(v, 'rb') as f:
                    img = Image.open(f)
                    img = img.resize((int(img.size[0] / 2 * self.ScaleMod),
                                      int(img.size[1] / 2 * self.ScaleMod)), 1)
                    # Convert the image back to byte data so we can apply the
                    # scale for Retina screens with from_data in ui.Image.
                    iodata = io.BytesIO()
                    img.save(iodata, 'png')
                    self.cache[k] = scene.Texture(
                        ui.Image.from_data(iodata.getvalue(),
                                           scene.get_screen_scale()))
        finally:
            # Finally, close all image and data-stream objects, then the
            # scene.Texture stays in the cache dict.
            iodata.close()
            f.close()
            img.close()
            del img
            del f
            del iodata
            for x in locals():
                print(x)
@stephen, I think your examples share an issue where you assign the resized and copied image to a variable with the same name as the original image, thus losing the reference to the original and then closing the wrong thing.
I see what you're saying, but what you cannot see is that originally (and before I fixed the duplicate streams) I had something similar to this:
img = Image.open('my_image.png')
img.load()
img_rs = img.resize((256, 256), 1)
with io.BytesIO() as iodata:
    img_rs.save(iodata, 'png')
    texture = scene.Texture(ui.Image.from_data(iodata.getvalue(), ...
...
I changed it to img = img... to reduce code, knowing I'll never need that exact ref to the original again during this loop session. The reason is that I place the Texture object in a cache dict and from here on it is called from there 🤓🤓
## EDIT
@mikael
I also forgot to include my finally block that handles any mishaps once caching is complete..
finally:
    iodata.close()
    f.close()
    img.close()
    del img
    del f
    del iodata
    for x in locals():
        print(x)
Firstly I would like to explain what a managed semaphore is. This was introduced to me during a session by Andy Clymer (DevelopMentor). Basically the requirement for a semaphore is pretty clear, but then again we don't have to resort to kernel objects and mutexes if we're not required to step out of our little managed world. Hence the concept of a managed semaphore. Basically the idea is to allow only a certain number of threads into any critical section. This limits the number of parallel threads executing a block of code to the max count of the semaphore.
Let's look at a simple piece of code doing this.
We would like to do something like this.
semaphore.Acquire();
try
{
    ... // throttled code
    ...
}
finally
{
    semaphore.Release();
}
Now let's see a class that would enable us to do something like this.
public class Semaphore
{
    private int current;
    private int max;
    private object semLock = new object();

    public Semaphore(int max)
    {
        this.current = 0;
        this.max = max;
    }

    public void Acquire()
    {
        lock (semLock)
        {
            Debug.Assert(current <= max, "Current has gone out of range..");

            while (current >= max)
            {
                Monitor.Wait(semLock);
            }

            current++;
            Debug.Assert(current <= max, "Current has gone out of range..");
        }
    }

    public void Release()
    {
        lock (semLock)
        {
            Debug.Assert(current <= max, "Current has gone out of range..");

            current--;

            // Mark a single thread that has called Monitor.Wait(semLock)
            // as ready to run... only run it when it can be granted the
            // semLock monitor.
            Monitor.Pulse(semLock);

            Debug.Assert(current >= 0, "Current has gone out of range..");
        }
    }
}
Now that we have a class that can enable throttling we need to figure out how to enable resource based throttling.
This implementation would suffice if the resource itself is capable of parallelism. Assume writing to a file or a port. Now we sometimes have scenarios where only one thread can write to a file. But we might need multiple threads to write to different files/ports. Think of a simple processor where we are sending out messages to different ports but we can have only one thread sending out messages to a particular port.
Hence we have the resource itself being mutually exclusive in usage, but different instances of the resource can be accessed in parallel. Now to write code like this we could end up writing double locks, which is quite messy code in itself and can end up in deadlocks if you don't acquire locks in the correct sequence. I would like to call out the forks and knives problem here (let's leave this for another post).
Thinking about this lead me to write a named semaphore. Basically the idea was to keep a dictionary of the resources currently accessed.
If a resource makes a request we give it a lock. This is under the condition that there is no other thread owning the current resource. This also checks that the current number of locks obtained is <= the max count, hence checking our throttling count. To put it in a nutshell: a thread can acquire a lock if there are enough available semaphores and there is no parallel thread that has acquired the lock for the current request.
To put this down in one shot we can just rewrite the semaphore class to have a dictionary and a count, and this is what I ended up with.
public class KeyBasedSemaphore<T> where T : struct
{
    Dictionary<T, object> locks = new Dictionary<T, object>();
    object thisLock = new object();
    int max = 0;
    int current = 0;

    public KeyBasedSemaphore(int maxCount)
    {
        this.max = maxCount;
    }

    public void Acquire(T key)
    {
        lock (thisLock)
        {
            Debug.Assert(current <= max, "Current has gone out of range..");

            while (locks.ContainsKey(key))
            {
                Monitor.Wait(thisLock);
            }
            locks.Add(key, new object());

            // Wait if there are already max number of instances running.
            while (current >= max)
            {
                Monitor.Wait(thisLock);
            }

            current++;
            Debug.Assert(current <= max, "Current has gone out of range..");
        }
    }

    public void Release(T key)
    {
        lock (thisLock)
        {
            Debug.Assert(current <= max, "Current has gone out of range..");

            // Remove the lock on the instance and decrement the counter.
            locks.Remove(key);
            current--;

            // Pulse any waiting threads.
            Monitor.Pulse(thisLock);

            Debug.Assert(current >= 0, "Current has gone out of range..");
        }
    }
}
OK, I admit it's dirty in its own sweet way. I sent this and pretty much got a request to refactor it. Basically Andy gave a very clear suggestion and asked whether the code would become clearer if I split out the intent. Primarily what we needed to do was to separate the semaphore from the named set of locks.
This is primarily what we need to achieve.
Obtained a named monitor ( name = file or port etc )
Obtain the Semaphore
Do Activity
Release Semaphore
Exit named monitor
Well this is from Andy
"I would therefore be inclined not to modify the Semaphore class to produce the required behaviour but to create two new classes. Produce a three-class solution using the original semaphore class, with a class to encapsulate the new synchronization pattern and a second to implement named monitor functionality using a monitor.
As for performance, in the implementation you supplied, if a thread is currently in a wait state waiting for access to the "port/file", it is being woken up every time a release is made, irrespective of whether it's for the resource it is interested in. By refactoring as above you could make it so they are only woken up when the actual resource they are after becomes available. Having said that, my way means using more objects for synchronization and that cost may outweigh any benefit. It will almost certainly depend on the behaviour of the application logic: if there is high contention for a given resource I think my way would be more optimal; if not, then using a single synchronization object may very well be quicker.
So unless performance is absolutely crucial I still like the idea of keeping semaphore behaviour and the exclusive access behaviour in two separate classes, wrapped by a third to implement your required synchronization pattern, because it makes the intent clearer and allows the re-use of common components."
And finally, the named monitor class that Andy refactored it out to.
class NamedMonitorCollection<K>
{
    private class NamedMonitor
    {
        public int usageCount = 0;
    }

    private Dictionary<K, NamedMonitor> namedMonitors = new Dictionary<K, NamedMonitor>();

    public void Enter(K name)
    {
        NamedMonitor monitorToAquire = null;

        lock (namedMonitors)
        {
            // Is the named monitor currently in use? If not, create an entry.
            if (namedMonitors.ContainsKey(name) == false)
            {
                namedMonitors[name] = new NamedMonitor();
            }

            monitorToAquire = namedMonitors[name];

            // Register the thread's interest in this monitor.
            monitorToAquire.usageCount++;
        }

        // Attempt to acquire the appropriate monitor.
        Monitor.Enter(monitorToAquire);
    }

    public void Exit(K name)
    {
        lock (namedMonitors)
        {
            Debug.Assert(namedMonitors.ContainsKey(name), "This monitor is unknown");

            Monitor.Exit(namedMonitors[name]);

            // Unregister interest in the named monitor; if there are no
            // interested parties, remove it from the list of known named
            // monitors.
            namedMonitors[name].usageCount--;
            if (namedMonitors[name].usageCount == 0)
            {
                namedMonitors.Remove(name);
            }
        }
    }

    /// <summary>
    /// IEnumerable method
    /// </summary>
    internal IEnumerable<K> GetMonitorsInUse()
    {
        lock (namedMonitors)
        {
            foreach (K monitor in namedMonitors.Keys)
            {
                yield return monitor;
            }
        }
    }

    public int inUseCount
    {
        get
        {
            lock (namedMonitors)
            {
                return namedMonitors.Count;
            }
        }
    }
}
Now if we look at the code above, we are surely taking more locks, but the code is much clearer. The overheads come into play only if we have a large number of resources waiting for threads, which would end up in a large number of locks and contentions. On the other hand, the first implementation works with only one lock and hence might have a minor performance edge. Then again, we can write everything in assembly.
Hope this might help in some ideas in doing interesting parallel processing in a purely managed world :)
ROT13 is a simple encryption method. It shifts each character of the clear text string 13 positions forward in the alphabet.
This Python one-liner does ROT13 encryption for you:
cleartxt = "berlin"
abc = "abcdefghijklmnopqrstuvwxyz"
secret = "".join([abc[(abc.find(c)+13)%26] for c in cleartxt])
print(secret)
# oreyva
Note:
cleartxt is the string we want to encode and must not contain spaces, numbers or upper case letters.
Don’t worry if this seems confusing. We’ll explain it all in detail below!
To encode strings containing spaces and upper case letters, use the built-in codecs library.
As an alternative, you can also use the following Python library call — that does ROT13 encryption for you:
import codecs
codecs.encode(phrase, 'rot_13')
(Reading time — 12 minutes, or watch the video!)
ROT13 Video Tutorial!
Now, let’s answer an important technical question:
What is ROT13?
The ROT13 algorithm is a simple encryption algorithm. It is used on many forums (e.g. Reddit) to prevent spoilers – or to hide the details of a conversation from newbies.
ROT13 is so simple that it provides almost no security. But if you understand it, you’ll finally be able to decipher those insider conversations on Reddit.
The algorithm can be explained in one sentence. ROT13 = Rotate the string to be encrypted by 13 positions (modulo 26) in the alphabet of 26 characters.
If you want to encrypt a string, shift each character forwards by 13 positions in the alphabet. If you move past the last character “z”, you start over at the first position in the alphabet “a”.
What Are ROT13 Implementations in Python?
First, I’ll give you an easy-to-understand implementation of the ROT13 algorithm. Then, I’ll provide you with a Python one-liner. Finally, I’ll give you the library call for ROT13 encryption. Use the version you prefer.
So check out this ROT13 algorithm without using libraries. Read the code carefully, because I will ask you a question about it in a moment.
def rot13(phrase):
    abc = "abcdefghijklmnopqrstuvwxyz"
    out_phrase = ""
    for char in phrase:
        out_phrase += abc[(abc.find(char)+13)%26]
    return out_phrase

phrase = "xthexrussiansxarexcoming"
print(rot13(phrase))
# kgurkehffvnafknerkpbzvat
print(rot13(rot13(phrase)))
# What's the output?
The last print statement showcases a nice property of the algorithm. But which one?
The solution is the output
"thexrussiansxarexcoming". This is because rot13 is its own inverse function (shifting by 13+13 positions brings you back to the original character in the alphabet).
An advanced coder will always prefer the shortest and cleanest way of writing Pythonic code. So, let’s rewrite the ROT13 algorithm as a Python one-liner.
abc = "abcdefghijklmnopqrstuvwxyz"

def rt13(x):
    return "".join([abc[(abc.find(c) + 13) % 26] for c in x])

print(rt13(rt13(phrase)))
We create a list of encrypted characters via a list comprehension. If you need a refresher on list comprehension, check out our comprehensive blog tutorial.
We then join this list together with the empty string to get the final result. In the list comprehension, each character,
c, from the original string,
x, is encrypted separately. For each
c, we find its position in the alphabet with
abc.find(c). Then we add 13 to this position. So
'z' returns index 25 and 25 + 13 = 38. But there is no 38th letter. Thus we use the modulo operator (
% 26) to make sure our values are in the range 0 – 25 (Python indexes start at 0).
To encrypt character ‘z’, the algorithm shifts its index 25 by 13 index positions to the right. It takes the result modulo 26 to get the final index of the encrypted character. This prevents overshooting by restarting the shift operation at index 0. It results in the following shift sequence: 25 > 0 > 1 > … > 12.
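You can check this wrap-around directly in the interpreter:

```python
# Quick check of the modulo wrap-around: 'z' (index 25) maps to 'm' (index 12),
# because (25 + 13) % 26 == 12.
abc = "abcdefghijklmnopqrstuvwxyz"
print(abc[(abc.find('z') + 13) % 26])
# m
```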
Alternative solution: one of my “Coffee Break Python” email subscribers, Thomas, came up with an alternative solution that is fast and easy to read.
def rot13(phrase):
    key = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    val = "nopqrstuvwxyzabcdefghijklmNOPQRSTUVWXYZABCDEFGHIJKLM"
    transform = dict(zip(key, val))
    return ''.join(transform.get(char, char) for char in phrase)
The idea is to “hardcode” the mapping between keys and values. This is quite “tedious” programming work. But it’s a perfectly valid solution to the ROT13 algorithm (and it works for uppercase letters, spaces, numbers and symbols too!).
Note that it does not encode non-letter characters. It simply returns them:
>>> rot13('Hello World!!') 'Uryyb Jbeyq!!'
Is There a Library for ROT13 in Python?
Yes! It’s a built-in library called codecs. Using the ROT13 algorithm with the library is simple. Just import the library and call the encode function.
Here is an example:
import codecs

phrase = "The Russians are coming!"

# Apply twice to get back original string
print(codecs.encode(codecs.encode(phrase, 'rot_13'), 'rot_13'))
# The Russians are coming!

print(codecs.encode('hello', 'rot_13'))
# uryyb
The encode function from the codecs library takes up to three parameters. The first parameter is the string object to encode. The second parameter is the encoding scheme (default: ‘utf-8’). The third parameter allows you to customize error handling. In most cases, you can skip the last parameter and use the default error handling.
What Are the Applications of the ROT13 Algorithm?
The ROT13 algorithm is easy to decrypt. An attacker can easily crack your code by running a probabilistic analysis on the distribution of the letters in your encrypted text. You should never rely on this algorithm to encrypt your messages!
So, what are the applications of the ROT13 algorithm?
- Obscure potentially offensive jokes in online forums.
- Obscure the result of puzzles in online forums.
- Obscure possible spoilers for movies or books.
- Make fun of existing (weak) encryption algorithms: “56-bit DES is stronger than ROT13”
- Obscure email addresses on websites against not very sophisticated email bots (the 99%).
- Use it as a game to find phrases that make sense in both forms, encrypted or decrypted. Examples: (png, cat), (be, or).
- ROT13 is a special case of the popular Caesar cipher. ROT13 serves as an educational tool to explain it. (Example)
In summary, ROT13 is more a fun encryption method that has been a popular running gag in internet culture.
As a user on StackExchange describes it:
“So a lot of things that did mild obfuscation used ROT13, because it was commonly available, and so it’s been backported into a number of more modern languages. It’s just a weird quirk.”
How is CAPITALIZATION Handled?
The encode function of the codecs library handles capitalization for you. If you apply ROT13 to an uppercase letter, it will remain uppercase after the encoding. If you apply ROT13 to a lowercase letter, it will remain lowercase.
Here is an example:
import codecs
print(codecs.encode('Hello', 'rot_13'))
# Uryyb
How is ROT13 Related to the Caesar Cipher?
The Caesar cipher is the generalization of the ROT13 algorithm.
“It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet.”Wikipedia
As you can see, ROT13 does nothing but fix the “number of positions down the alphabet” to +13.
Why do we shift the original text, called ‘cleartext’ or ‘plaintext’, by 13 positions and not another number? It ensures that applying the encryption twice returns the original cleartext. Hence, you do not have to define two separate methods for encryption and decryption – one method to rule them all!
This is not the case if you use any other number. If you shift the cleartext by 5 positions, ROT5, and apply it twice, you get ROT10 encryption (5+5=10).
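The self-inverse property is easy to verify with a generalized ROT-N. This is a small sketch, not part of the article's original code:

```python
def rot_n(text, n):
    """Rotate each lowercase letter of text by n positions (a general Caesar shift)."""
    abc = "abcdefghijklmnopqrstuvwxyz"
    return "".join(abc[(abc.find(c) + n) % 26] for c in text)

print(rot_n(rot_n("berlin", 13), 13))  # ROT13 twice is the identity
# berlin
print(rot_n(rot_n("berlin", 5), 5) == rot_n("berlin", 10))  # ROT5 twice equals ROT10
# True
```

Only N = 13 satisfies 2N ≡ 0 (mod 26), which is why ROT13 alone is its own inverse.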
Online Tool for ROT13 Encryption and Decryption
To encrypt your own cleartext, simply replace the string value of the variable ‘clear_text’ with your personal string.
Click to visit the interactive tool to obfuscate your own texts with ROT13.
What Are the Alternatives to ROT13?
Most alternatives are stronger than ROT13. Here are a few of them:
- Triple DES
- RSA
- Blowfish
- Twofish
- AES
If you want to dive deeper into these alternatives, check out this article that briefly describes their ideas.
What Are Examples of ROT13?
Here are examples from various sources on the web. I‘ve chosen ones where the ROT13 encryption produces some kind of English word.
- aha ↔ nun
- ant ↔ nag
- balk ↔ onyx
- bar ↔ one
- barf ↔ ones
- be ↔ or
- bin ↔ ova
- ebbs ↔ roof
- envy ↔ rail
- er ↔ re
- errs ↔ reef
- flap ↔ sync
- fur ↔ she
- gel ↔ try
- gnat ↔ tang
- irk ↔ vex
- clerk ↔ pyrex
- purely ↔ cheryl
- PNG ↔ cat
- SHA ↔ fun
- furby ↔ sheol
- terra ↔ green
- what ↔ Jung
- URL ↔ hey
- purpura ↔ Chechen
- shone ↔ FUBAR
- Ares ↔ Nerf
- abjurer ↔ nowhere
Write a ROT13 Encoder Decoder in Python
Since we’re programmers we want to automate everything. I don’t want to open Python every time I see something encoded in ROT13 and have to write a function. It would be great if we could apply ROT13 encryption/decryption from the command line!
Let’s create a script, rot13.py, to run whenever we find some text in ROT13. We want the final command to look like this
$ python rot13.py 'text to encode/decode'
So we need to
- Create the script rot13.py
- Pass command line arguments to our script
Thankfully, the built-in sys module lets us access command line arguments. The object sys.argv is a list containing all arguments passed to the script.
# sys_file.py
import sys

print(f'sys.argv is: {sys.argv}')
for arg_index in range(len(sys.argv)):
    print(f'Argument #{arg_index} is: {sys.argv[arg_index]}')
Let’s run this from the command line and pass it some arguments
# Pass 3 arguments to sys_file.py
$ python sys_file.py hello goodbye come_back
sys.argv is: ['sys_file.py', 'hello', 'goodbye', 'come_back']
Argument #0 is: sys_file.py
Argument #1 is: hello
Argument #2 is: goodbye
Argument #3 is: come_back
The first element of
sys.argv is always the name of the script. The other elements are the arguments you passed in the order you passed them. When you access these in your Python script, it is the same as indexing starting from 1. You access the first argument with
sys.argv[1].
Note: arguments are separated by spaces. Thus come_back is one argument and come back is two.
Let’s apply this to the ROT13 functions we wrote earlier.
# rot13.py
import sys
from codecs import encode

# Store 1st argument as a variable
my_text = sys.argv[1]

# Print out encoded version to the screen
print(encode(my_text, 'rot_13'))
We will only pass one argument to this script: a string we want to encode. We store this as a variable
my_text and pass it to the encode function from the codecs module.
Save rot13.py in your Home directory. Now whenever you find some text in ROT13, you just need to open a terminal window and can decode it within seconds. Type the following into your terminal window now!
$ python rot13.py 'Lbh ner nznmvat!'
Where to Go From Here?
ROT13 is a simple encryption method. It shifts each character of a string, x, 13 positions forwards in the alphabet.
It offers no encryption, only obfuscation. However, it is a great way to obscure messages in online forums and private communications. ROT13 is a special variant of Caesar’s cipher where the function is its own inverse.
'a' >> (shift by 13 positions) >> 'n' >> (shift by 13 positions) >> 'a'
Do you want to improve your Python skills to the point where every software company would love to hire you because you belong to the top coders? Check out the Coffee Break Python book series! It’s a fun way of accelerating your Python coding skills in a very engaging manner. (And we just reached LeanPub bestseller status in the category Python!)
Data Structure Schema Locations
Topics
The Amazon Mechanical Turk uses several XML data structures to help you define your tasks flexibly. These data structures are specified using schemas that are versioned. This allows MTurk to add new versions of task types while preserving backwards compatibility.
When constructing an XML object for any of these structures, you must declare a namespace that matches the target namespace for the schema for the structure. The namespace is defined using the URL to the schema definition. For example, here is how to declare your namespace when constructing an HTMLQuestion:
<HTMLQuestion xmlns="">
  [...]
</HTMLQuestion>
If the service returns an error message about data not validating against the schema, make sure your namespace declaration matches the target namespace specified in the schema.
Important
Beginning Tuesday, December 12th 2017 the QuestionForm structure will no longer support the FileUploadAnswer element and the Application element. The 2017-11-06 version of the QuestionForm schema has been updated to reflect these changes. If you don't use the deprecated elements in your QuestionForm, the 2005-10-01 schema will continue to work.
You can find the schema namespace values for all of the question and answer data structures below:
import "golang.org/x/tools/playground/socket"
Package socket implements a WebSocket-based playground backend. Clients connect to a WebSocket handler and send run/kill commands, and the server sends the output and exit status of the running processes. Multiple clients running multiple processes may be served concurrently. The wire format is JSON and is described by the Message type.
This will not run on App Engine as WebSockets are not supported there.
Environ provides an environment when a binary, such as the go tool, is invoked.
RunScripts specifies whether the socket handler should execute shell scripts (snippets that start with a shebang).
NewHandler returns a websocket server which checks the origin of requests.
type Message struct {
	Id      string   // client-provided unique id for the process
	Kind    string   // in: "run", "kill" out: "stdout", "stderr", "end"
	Body    string
	Options *Options `json:",omitempty"`
}
Message is the wire format for the websocket connection to the browser. It is used for both sending output messages and receiving commands, as distinguished by the Kind field.
Options specify additional message options.
Package socket imports 20 packages and is imported by 8 packages. Updated 2017-08-15.
dlopen, dlsym, dlclose, dlerror - interface to dynamic library loader
#include <stdio.h> #include <dlfcn.h>
void *dlopen(pathname, mode) char *pathname; int mode;
void *dlsym(handle, name) void *handle; char *name;
void dlclose(handle) void *handle;
char *dlerror(void)
The dlopen function provides an interface to the dynamic library loader to allow shared libraries to be loaded and called at run time. The dlopen function attempts to load pathname in the address space of the process, resolving symbols as appropriate. Any libraries that pathname depends upon are also loaded.
If pathname includes a /, dlopen will attempt to open it as specified. Otherwise, dlopen will attempt to locate pathname using shared library search directories in the order specified below (see loader(5) for more details on shared library search directories): The current directory The program's rpath directories LD_LIBRARY_PATH directories Default shared library directories
If mode is RTLD_LAZY, then the run-time loader does symbol resolution only as needed. Typically, this means that the first call to a function in the newly loaded library will cause the resolution of the address of that function to occur. If mode is RTLD_NOW, then the run-time loader must do all symbol binding during the dlopen call. The dlopen function returns a handle that is used by dlsym or dlclose call. If an error occurs, a NULL pointer is returned.
If a NULL pathname is specified, dlopen returns a handle for the main executable, which allows access to dynamic symbols in the running program.
The dlsym function returns the address of the symbol name found in the shared library corresponding to handle. If the symbol is not found, a NULL pointer is returned.
The dlclose function deallocates the address space for the library corresponding to handle. The results are undefined if any user function continues to call a symbol resolved in the address space of a library that has since been deallocated by dlclose.
The dlerror function returns a string describing the last error that occurred from a call to dlopen, dlclose, or dlsym.
The dlopen and dlclose routines might dynamically change the resolution of certain symbols referenced by a program or its shared library dependencies. The dlopen routine might resolve symbols that were previously unresolved, and dlclose might cause resolved symbols to become unresolved or to be reresolved to a different symbol definition.
Use of the dlsym routine is the preferred mechanism for retrieving symbol addresses. This routine reliably returns the current address of a symbol at any point in the program, while the dynamic symbol resolution described previously might not function as expected due to compiler optimizations. For example, the address of a symbol might be saved in a register prior to a dlopen call. The saved address might then be used after the dlopen call, even if the dlopen call changed the resolution of the symbol.
Dynamic symbol resolution functions reliably for programs compiled with the -O0 flag. Also, routines that do not call dlopen or dlclose, either directly or indirectly, can safely depend on dynamic symbol resolution.
The maximum number of shared libraries that can be loaded simultaneously by a single process is approximately 60. This limit can be raised by reconfiguring the kernel's vm-mapentries parameter. This parameter should be set to at least three times the desired maximum number of shared libraries that can be loaded by a process. See the manual System Administration for instructions on reconfiguring the vm-mapentries parameter.
ld(1), loader(5).
http://backdrift.org/man/tru64/man3/dlopen.3.html
Hello all,
Here we will explore another side of LINQ (Language-INtegrated Query). We all know that LINQ is used to query objects (even system objects). Using the System.Reflection namespace, we can get the list of methods, fields, etc. with a LINQ query (with filtering and sorting). Let's see a small example:
IEnumerable<Type> all = from t in typeof(SqlConnection).Assembly.GetTypes()
                        where t.Name == "SqlConnection"
                        orderby t.Name
                        select t;

foreach (var v in all)
{
    textBox1.Text += "Start: Methods of " + v.Name + "\r\n";
    textBox1.Text += "===================================\r\n";
    foreach (MemberInfo mi in v.GetMembers())
        textBox1.Text += mi.Name + "\r\n";
}
The output will display the list of members (methods, properties, and so on) of the SqlConnection class.
Namoskar
Nice example
Thanks
Mahir M. Quluzade
https://blogs.msdn.microsoft.com/wriju/2007/02/16/linq-in-not-only-for-object-query/