I run XP SP3 with Office Home and Student 2007 and CAV 220.127.116.11. I keep getting the error message 'Word cannot save due to a file permission error' when saving Word docs. I disabled HIPS, and the error appeared to go away. Now it's back. Is this a known issue? Is there a workaround? I would like to retain CAV, but may have to move to another application. It takes three or four attempts and many clicks just to save a doc. (:AGY)
Thanks. With respect, as a moderator you should answer my questions. ‘An alternative AV’ is scarcely calculated to instill confidence in Comodo products. In saying what you say, you may think you have answered the questions. But I am looking for answers - is there a conflict? - is there a workaround? So please answer those. :-TD
At the moment, given your reply, which smacks to me of a couldn't-care-less attitude, I am thinking of going over to AVG. I have used Comodo applications very happily for some time now, and I would prefer, given good advice, to stick with what I know and not go over to what I don't know.
CAV 2 development has stopped. It is a “Beta” product and NEVER reached production release, so my point is beta products are known to have PROBLEMS.
Anyway… I would suggest Uninstalling & Re-installing CAV 2, and disabling the HIPS in CAV 2 & see if the error goes away permanently.
I think on this basis I'll just move away from Comodo altogether. Antispam is dead, CAV gives trouble, and I shouldn't be surprised if there are problems with the Firewall. A great pity.
With equal respect, a moderator's prime task is to moderate. While a lot of the mods have a lot of experience, these forums are not just a place where a handful attempt to answer all questions.
But I am looking for answers - is there a conflict? - is there a workaround? So please answer those. :-TD
I believe the conflict is caused by the way Word 2007's native format (docx) is actually an encapsulated XML file in a renamed ZIP archive. I'm not 100% certain of this, but I have seen similar problems with other products.
Nope - it does this with Excel 2007 when opening a doc in compatibility mode as well (2003)… now if Comodo 2 beta is never going to be anything but Beta, I imagine the same goes for version 3?
Aah well, it was a nice thought… good thing I didn’t sign for the certificate then - wonder if it’s beta too.
As I posted before:
A workaround is described here, it’s not the best but it will work.
Don’t imagine too much. CAVS 3 is different from CAVS 2. New architecture.
---
How to create a responsive website - Tutorial
This tutorial will show you step by step how to create a responsive website easily, using the free editor "RocketCake":
What is a responsive website?
"Responsive" is just a fancy word for "adjusting to the screen size". Since people view websites on different devices - mobiles, tablets, PCs, notebooks - it has become very important for a website to be easily readable on all the different screen sizes: a responsive website automatically looks good on the tiny 320x600 screen of a small smartphone, but also in a full-screen 2048x1024 browser on a desktop PC. This is done, for example, by adjusting font sizes, rearranging and hiding elements, and making the page easily scrollable on mobiles. All this is easily doable using RocketCake.
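Under the hood, responsive behavior like this is usually implemented with CSS media queries. Here is a rough, hand-written sketch of the idea (illustrative only; the class names are made up, and this is not RocketCake's actual generated output):

```css
/* Center the content and cap its width on large screens */
.menu { max-width: 800px; margin: 0 auto; }

/* On narrow screens (e.g. phones), adjust the layout */
@media (max-width: 600px) {
  body     { font-size: 14px; }  /* smaller base font */
  .menu    { max-width: 100%; }  /* let the menu span the full width */
  .sidebar { display: none; }    /* hide non-essential elements */
}
```

RocketCake lets you express rules like these visually through its breakpoint editor, without writing any CSS yourself.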
Step 1: Download and install RocketCake
If you haven't done this yet, you need to download and install RocketCake. It is a free responsive website editor. You can download it from here.
Create a new empty website
After you start the RocketCake program, it will ask you to create a new website from a template. Choose the first entry 'Empty page' to start with a new, empty website. The editor will now look like this:
Take a look at the 'Properties' window on the left. Here, you can quickly change the appearance of the page (or whichever element is currently selected). You can enter a title for your page here (this is the text which usually appears, for example, in Google when it displays your page as a search result) or the default colors for your links. Also, you can set a background color, gradient, or image if you like.
Creating a responsive website
To start, add a Navigation Menu to the page. Select the Navigation Menu element in the toolset on the right, and click into the page. This creates a menu element:
You can click it and directly type text into it in order to create some menu items. Do this, and create the menu items named for example "Company" and "About". Also, you can change the background color, to make your website look fancier, if you like. By default, the menu will have a width of 100%, spanning the full width of the page. This is quite useful, because it will then adjust automatically to different screen sizes.
Try it out: see the horizontal slider at the bottom of the page? Move it to the left and back again: with it, you can preview how the page will look on devices with different screen widths.
Put the slider back to the right, and let's adjust the layout of the menu a bit. First, we set the menu to have a maximum width of 800, so the website will look nicer at huge resolutions.
While the menu is selected, in the property window on the left, search for the "MaxWidth" entry and change it to "800", so the menu won't get wider than 800 pixels:
Also, we want the menu to be centered in the container, so click on the "Center Text" button while the menu is still selected:
Add a logo image to your page
Great! Next, we want an image directly below the menu. We could use the Image element for this, but because we want to add some text on top of the image later, we use a Container instead and simply use its background-image property. So:
- Select the Container element in the toolset on the right, and click into the page somewhere below the menu you created.
- Drag the white square of its lower border a bit down to make it bigger.
- Then, in the property window on the left, find the "BackgroundMode" entry, and set it to "Image".
- A new entry "BackgroundImage" will appear, in which you can select some image file from your disk.
- As you did with the menu, search for the "MaxWidth" entry and change it to "800".
- Also, click the 'Center Text' button while the container is selected.
The result should look like this (depending on the image you selected):
Add a website body
Now we have a menu, a nice looking image as logo, but we still need to add some real content, like text describing your website. In order to do this, we again add a Container:
- Select the Container element in the toolset on the right, and click into the page, somewhere below the image you added last time.
- In order to make it the same size as the menu we added on top, do the same as we did with the menu: while the new Container is still selected, in the property window on the left, search for the "MaxWidth" entry and change it to "800", so it won't get wider than 800 pixels.
- Also, click the 'Center Text' button while the container is selected.
- Then, click into the middle of the container, and start typing some text. You can format the text however you like:
Insert an image
To make the website text look fancier, you can add an image into the text easily:
- Select the Image element in the toolset, and click into the text, where the image should appear.
- Resize the image to fit your needs.
- Right-Click the image, and in the menu, select "Text Float -> Left", in order to make the text float around the image.
- While the image is selected, in the Property Window, search for the entry "Margin", and change it from "0,0,0,0" to "10, 10, 10, 10". This creates a small margin between the text and the image, which looks much nicer.
The result will look like this:
In the same way you added the image into the text, you can also add a Container directly into the text, to create, for example, a news box or similar.
Your website is already responsive. You can preview it in any browser or in RocketCake itself, resize it there, and see how it adjusts itself automatically.
Add other pages
If you are finished with that page of your website, you might want to create a second page. Take a look at the 'Documents' window at the top left, where your only page, initially named 'index.html', is shown. Just right-click on the root element (probably named 'unsaved website' if you haven't saved it yet), and choose 'Add Page'. Alternatively, you can also use the menu command 'Insert -> Add to Project -> Add Page'.
A new page will open, which you can again fill with content. You can edit its name in the 'Properties' window.
To add links between your pages or to other websites on the internet, you can create Hyperlinks. To do this, mark the part of the text you want to be the hyperlink, right-click on it and select 'Insert Hyperlink...'.
Alternatively, you can also use the hyperlink icon in the toolbar of the editor. This also works for images and styled buttons.
A dialog will now open where you can enter the URL of the hyperlink. You can also choose 'Page in this project' as Link type and then select another page in this website.
If you are creating a text link, there will also be a 'Style' section in that dialog. Here you can define and reuse global named styles for your links, if you want more than one or some special styles. Defining different hover colors, disabling underlined links, and more is possible here.
Tips for improving the website
Of course, the website isn't finished yet. Here are some tips on how to improve it:
- If you want to add some text on top of the logo image, select the "Floating Text" element and click onto the image. That's it.
- On small screens, the menu will automatically collapse to a smaller 'mobile' menu, which you can also see in the editor if you make the screen size a bit smaller using the slider at the bottom. This behavior can be adjusted in the property window.
- For adjusting an element dynamically based on a smaller screen width, just right-click that element, and select "Edit Breakpoints". This will open the breakpoint editor, where you can easily specify rules in order to resize, hide or adjust elements based on the screen size.
Saving and Previewing the Website
To save your website, use the menu command 'File -> Save', so you can continue your work later on this page. You can also preview your website in your browser by clicking 'Publish -> Preview', or simply pressing the shortcut key F5.
Publish the website
Once you are finished with your website, you might want to publish it to a web server, so that other people can read it. You can simply use the "Publish -> Publish to the Internet" command for this, and enter the user name of the FTP server where you want to upload the page.
Alternatively, you can use the command "Publish -> Publish to local disk". A dialog will appear to select a target directory. When you press OK, all HTML and image files will be generated on your disk, in the directory you selected. You now only need to upload these to your webserver or FTP server. For this you can use any FTP program. Ambiera recommends the free FTP client 'Filezilla' (http://filezilla-project.org/) or WinSCP (http://winscp.net/).
---
Windows to Go
Hey guys, this is a tutorial on how to set up a Windows to Go (a live Windows 8 boot) USB drive using any version of Windows, provided you have a copy of Windows 8.
Things you will need:
- A USB or external HDD with 16GB of space (It is actually possible to do it onto an 8GB drive but depending on its actual formatted capacity you may have to remove a few drivers etc.)
- A copy of a Windows 8 disc or ISO (any version is fine, including upgrade ones); you can grab a free 90-day trial here
- Less than 30 minutes of time, depending on your hardware
- Locate your Windows 8 disc or ISO and mount it
- Navigate to \sources and copy the install.wim file to your desktop or other convenient location
- Insert your USB or HDD and format the partition as NTFS with the default allocation
- Download the free GImageX (http://www.softpedia...s/GImageX.shtml) and extract it
- Run the x86 or x64 version depending on the version of Windows you are using
- Navigate to the "Apply" tab
- In the source section enter or browse for the location of the install.wim we copied earlier
- In the destination section enter or browse for the location of the device you want to use for Windows to Go
- Leave the checkboxes unchecked and hit apply
- After the imaging process is complete, navigate to Disk Management (type "partition" into the Start search and you will find it)
- Find the partition you are using for Windows to Go, right-click it, then select "Mark partition as active" - this will allow us to boot from this partition; if it's already active then don't worry about this step
- Navigate to your Windows to Go disk and go to \Windows\System32
- Open up CMD from this location (or change directory to the above folder) and enter:
bcdboot.exe X:\Windows /s X: /f ALL
where "X" is your Windows to Go drive letter
- Congratulations! Your Windows to Go device is now ready and all you will have to do is boot from the USB at startup then run through the one time configuration settings!
- Double check the drive letter when flashing the device because if you have other partitions then you will screw them up
- Use a USB device with a high read/write rate, because it will enhance disk I/O performance on the machine you are running it on. USB 3.0 and/or an SSD will greatly increase performance!
- Copy a few installers to a folder on the USB for easy setup when you're up and running
- Try not to unplug your Windows to Go device while the computer is still running, because it may cause data corruption; if you do, you will have 1 minute before it shuts down
- You can use this method to quickly flash SSDs, HDDs, USBs and SDs with a custom image of Windows 8!
---
How can I get a video *into* the photo/video gallery on an iPad or iPhone?
Let's start by assuming that I have a perfectly compatible file. (In my use case, I was literally trying to get a video I took on my iPhone, that lives in my iPhone photos/video gallery, onto my iPad, so this isn't about conversions or codecs.)
Now, there are many ways to transfer and watch the video, but I can't find a single darn way to save it with my other videos and photos:
If it's short, or I'm willing to clip it, I can email it to myself; but unlike photos, tapping on a video enclosure doesn't let you save it to the native app. You can open it in VLC, GoodReader, etc.
If it's under 180MB, I can upload it to Dropbox from the source device (iPhone for me) and then access or watch it on the destination device (iPad for me). But I can't save it to Photos from Dropbox on the iPad.
Why do I care, if I can access it and even watch it as described above? Two reasons:
I like having things organized in one place. I don't want to have to remember which vids are in Photos vs. Dropbox vs. GoodReader, etc.
Most apps that use video (including Apple's own iMovie, for frack's sake) can only open and access movies located in the photos app.
Please tell me I'm missing something elegant, if not obvious?
The implications of the "each app is a sandbox with separate data" security measures are causing your pain. Once you look at the system that way, you see you are asking to bypass security for arbitrary apps. This isn't going to be easy. Basically, you need apps like Safari to offer to store a video to the Camera Roll just like they add/copy still pictures. It's really up to Apple to provide this so developers can use it.
@brmike, I don't think this is even a consistent application of the walled garden/sandbox/whatever, though. Apple lets its own mail as well as other 3rd party apps save photos to the Photos app, but not videos.
Consider unprotecting this: the Syclone0044 answer is not up to date and is useless in its current form to iTunes 12.6 users such as myself.
@AntonTropashko good point - done!
thanks, dumped my findings from yesterday into yet another answer.
I'm surprised nobody has answered this correctly.
The solution is to open iTunes, click on your iPhone, go to the Photos tab, then enable Sync Photos for the folder that contains the photo and video subfolders you want to sync. Each subfolder will then be synced to your iPhone underneath the Camera Roll as new Albums; these are treated separately from the Camera Roll and will always remain synced.
Furthermore, apps that can access the Camera Roll almost always can also access Synced Albums, so this should really work well for you. I keep a whole bunch of old photos and videos on my iPhone at all times using this method and it works great.
As a bonus, unlike the Camera Roll, these Synced Albums don't get backed up in iTunes, so it won't slow down your Backup like a growing Camera Roll will.
I upvoted this because it works, but it's still sort of a kludge, and isn't what I need. I want to edit a movie that I can get onto my device (via an enclosure, say) using a 1st-party app on that same device. I don't want to have to go home and tether to my computer...
The answer is obsolete in the newest versions of iTunes; consider adding a section with up-to-date instructions. In fact, I see no sync checkbox anywhere. Frustrating. Thank you, Apple. Moreover, there is no Photos tab for the device in iTunes.
Apparently the only way to get a video into the Photos app on iOS 10.{1,2} is to use AirDrop. If you import video through iTunes it will end up in the Videos application.
iMovie and 3rd-party video apps can see videos only in the Camera Roll, and the videos (including home videos: the type is not important) in the Videos app cannot be accessed by 3rd-party apps (for editing or whatever).
You are missing nothing obvious. Currently, I have to shoot video from a camera that works with the camera connection kit for iPad since that is the only way to get video into the photo roll at present.
The camera connector kit won't work to import video from iPhone to iPad.
The SD card reader won't let your iPhone write out video to a card.
As you've discovered, no app has the ability to add video to the camera roll other than calling the built in camera to capture it and store it in the roll.
I've looked over the iCloud and iOS 5 announcements closely, and even though it looks like video you capture will get backed up to the cloud as part of the nightly Wi-Fi application data backups, the Photo Stream demonstration very carefully said "photo" the whole time, and no videos were present in any of the files. I don't know for sure, but it doesn't look like any change to this has been announced for the near future.
I have submitted a bug report against iMovie for iOS and perhaps if enough people do, it might get added. Also, consumer feedback might help too.
I personally don't care if I can use wifi or bluetooth device to device, use the camera connector kit, go through the cloud. I'd just like to be able to edit video on iPad iMovie using footage shot from iPhone without needing to bring a PC and a USB cable for transfer.
The easiest way is to use an app that allows you to download from the account "Dropbox" and save to camera roll. I use "Downloads Free" by LS apps. It has its own browser. Navigate to your Dropbox account online. Select your video and download it. It gets saved under the files folder. Select your video and select save to photos option. Photos refers to the Camera Roll. Voila.
Or better yet, use Dropbox's app.
I use Photo Transfer App to copy videos over WiFi from my iPhone to my iPad.
It works well for me. The videos appear in the Camera Roll on the iPad, alongside videos I took on the iPad itself.
I found a simple way to get a photo/video into the Photos app:
Tap on the photo/video in Safari and open it in a new tab.
Tap the "share" button and look for "Save photo"/"Save video".
Selecting this will put the current image into your Photo library.
You tap the camera icon and press either record (for a video) or capture (for a photo) then it will automatically save.
There isn't anything automatic about a saved video from an iPhone getting into the video app on an iPad - so this answer will need some editing to explain how to make the jump of a video file external to an iOS device getting incorporated into the videos app.
---
Is anyone doing this? I use DOSD for my on-screen display and I want to share the GPS strings between the two devices. I know with DOSD I can use pass-through mode and it will send the NMEA strings over the serial port, but with my APM connected to DOSD I don't see anything. I have my DOSD serial port connected to soldered pins on the APM board GPS port (TX, RX, GND).
I am wondering if I can pass the GPS strings from APM to DOSD GPS port instead.
Have you got DOSD passing the correct sentences? It is capable of sending three types.
I was sending out RMC and GGA strings (mode 1) and also tried the GGA strings only (mode 2), both with no success. The APM is not even giving an indication that it is waiting for GPS lock; the red LED was not lighting at all.
I have my DOSD serial port connected to these pins on the APM board. I soldered leads to the Tx, Rx, GND.
Could it be that my connections are incorrect? I've tried switching the RX/TX pins, but still nothing.
You should check that your baud rates match, as well as the TX vs RX signal polarity. Also, you can use the APM CLI "rawgps" test, or the "gps" test, to see if it's receiving data and/or whether the data can be interpreted by the APM.
Darren, I am going to try that now.
OK, so I have tried just about everything... The DOSD recognizes the GPS fine. I swapped the serial TX/RX polarity several times, still the same result. I found out that the DOSD serial port is set to a 115200 baud rate, and it doesn't look like I can change that. I tried adding that rate in AP_GPS_Auto, but I still get gibberish out of the DOSD serial port in the CLI test. It looks the same as if I were to leave the GPS port open and not connected to anything.
I guess I will have to ditch DOSD for this project and use Remzibi...
If your GPS is set to 115200, set Serial1 to match.
Use this in APM_Config.h:
#define SERIAL1_BAUD 115200
and then change this line in System.h, under "static void init_ardupilot()", from:
Serial1.begin(38400, 128, 16);
to:
Serial1.begin(SERIAL1_BAUD, 128, 16);
This should hard-set your GPS port to 115200.
Give that a go...
Thanks again, Darren.
I've made this change and uploaded via Arduino. I haven't had any time to test it yet; I will hopefully get to give it a try tonight.
---
Sadly we have no documentation and, as far as I know, nobody except Tridge has used it. Still, given Tridge's track record of building great software, I suspect it works well, and if it doesn't, I'm sure we can fix it. So as not to let this piece of code go to waste, I'd like some help from people who are interested to give it a try and help me figure out how it works.
Here's the little that I know:
So if you want to give it a try, please do, and stick any findings, questions or issues below. Alternatively, issues can go into the issues list.
I'll start sticking things into the wiki as they become clear.
A couple of points:
If the tracker is not in proxy mode (i.e. it doesn't have the radio on the radio port), you need to feed it a MAVLink stream: in Mission Planner press Ctrl-F, click "mavlink", and select the tracker COM port.
If the tracker is in proxy mode, then only the USB is required, as the tracker receives data directly from the MAV.
I used the antenna tracker feature of Mission Planner with great success two years ago. But when my "field laptop" died I started to fiddle around with the Ardustation. While the Ardustation was fairly simple to use, the main purpose of the antenna tracker, [for me], was to improve the link between MP and my airplane.
I used a Pololu Maestro to drive a Servocity gearbox for pan and a Hitec HS-5995-TG servo connected directly to the tilt.
The Maestro was great, because one can tailor the speed and range of the servo movement, preventing jerks or mechanical jams/collisions on the antenna tracker gimbal.
I looked into the Ardustation Mega for a little bit; I thought it might offer some great features as a control system for the antenna tracker.
My attention drifted away from the tracker system, and while I was saving up for a replacement laptop the tracker hardware got pillaged for other temporary projects.
Re-tasking some older flight controllers could provide the tracker with its own GPS, compass and accelerometers for setting up the tracker. I currently have three APM 1.4s and an APM 2.0 collecting dust, and I often wonder about using them for my next tracker system.
I look forward to seeing what new developments arise from this thread; I guess now I'll have to start building another one.
If Tridge has written it then I want to try it! Together with Randy they make great software.
I think that we should start with the basics in the wiki: what an antenna tracker is and why we would want to use it, basic configs and more advanced ones.
OK, I originally thought MichaelO was talking about the antenna tracker software built into Mission Planner, but now I realise this is a way to get the MAVLink info from the laptop that has a radio link attached to the antenna tracker. Thanks!
no..... the hardware version.
Does the tracker in proxy mode send data to MP?
Since the supported boards have an IMU and can use GPS and a telemetry radio, the tracker has the same components as the UAV, so it knows its position and location relative to the UAV.
To me it sounds like the perfect tracker; in theory it should even be able to be a moving tracker, i.e., in the back of a car or a truck while the passenger flies FPV.
And it could have an additional mode like RTL: Return to Tracker.
Would that be possible?
Using the antenna tracker with a simple geared servo for 360 degrees of pan: what will happen when the pan servo reaches 360 degrees and the airplane continues to circle around? Will the servo turn quickly back to 0 and start following the airplane again?
To be honest, I find an antenna tracker to be an unnecessary complexity. If one is flying close in, use circular polarized antennas and if long range, even a Yagi has enough beam width to point it manually. Tridge is using an extremely high gain antenna so it may be warranted but if you ask me, that is an exception.
I think it's possible to have a return-to-tracker in the future. I imagine it would be done using rally points and have the mission planner constantly update the rally point.
Sorry for not answering this sooner.
Not sure, but I think it would. If you use continuous servos then I suspect it tracks smoothly without whipping around to the opposite side.
---
Always getting a positive result when predicting with SVM
I trained an SVM using SURF and BoW, and now when I predict on an image it always returns 1 as output, even when I use a negative image.
Here are the parameters for the SVM:
CvSVMParams Params;
Params.svm_type=CvSVM::C_SVC;
Params.kernel_type=CvSVM::LINEAR;
Params.term_crit = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
Params.gamma=3;
CvSVM svm;
svm.train(training_mat,labels,cv::Mat(),cv::Mat(),Params);
And here is my code for prediction:
predict_img = cvLoadImage("ss.jpg",0);
detector.detect(predict_img, keypoint2);
RetainBestKeypoints(keypoint2, 20);
dextract.compute( predict_img, keypoint2, descriptors_2);
Mat my_img_3 = descriptors_2.reshape(1,1);
float response = svm.predict(my_img_3);
cout<<response;
Here is the initialization:
BOWImgDescriptorExtractor dextract(extractor,matcher);
SurfFeatureDetector detector(500);
You should check whether you set a big enough C value to force a reasonable model (I don't see it in your code, so according to the OpenCV documentation it will be set to 1000). You should try many values; for many real problems one even has to use C on the order of 10^10. With too small a C, the SVM will simply look for a hyperplane with a small norm, without really paying attention to correct classification. It is accessible through the Cvalue parameter in the OpenCV implementation.
Params.gamma=3;
Even though it does not cause an error, you do not need to set the gamma value: it is not used by the linear kernel; it is required only for the RBF kernel.
You should also make sure, that you are training similar amount of positive and negative samples (or use some class-weighting technique), as it could also lead to "trivial" model.
Yes, I am using the same amount of positive and negative images, but I train them from different directories: one directory for positive images and another for negative. I didn't understand your point about C.
One of the most important parameters of the SVM model is C, the amount of "punishment" for misclassification. It should be set up in the model params: Cvalue – parameter C of an SVM optimization problem (C_SVC / EPS_SVR / NU_SVR); see http://docs.opencv.org/modules/ml/doc/support_vector_machines.html. By default it is set to C=1000, which may be orders of magnitude too small.
@lijlot, are you asking about this: CvParamGrid CvSVM = get_default_grid(CvSVM::C); ? It gives me an error that get_default_grid(CvSVM::C) is not defined.
I'm talking about Cvalue of the Params object
In addition to the answer that has already been provided: you will invariably end up asking another question in the future on how to improve your SVM classification performance, that is, how to set the model parameters correctly.
Towards that goal, you should also investigate the grid search provided by CvParamGrid.
Best
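Putting the two suggestions together, a sketch in the OpenCV 2.x C++ API might look like this. This is illustrative only: `training_mat` and `labels` are the asker's existing variables, and the C value of 1000 is just a starting point to sweep from.

```cpp
// Sketch (OpenCV 2.x API): set C explicitly, or let train_auto
// cross-validate over a grid of C values for you.
#include <opencv2/ml/ml.hpp>

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
params.C           = 1000;  // try several orders of magnitude
params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

CvSVM svm;
// get_default_grid is a *static* member, so it needs the CvSVM:: prefix;
// train_auto then searches the C grid with 10-fold cross-validation.
svm.train_auto(training_mat, labels, cv::Mat(), cv::Mat(), params,
               10, CvSVM::get_default_grid(CvSVM::C));
```

Note that calling get_default_grid without the CvSVM:: prefix is what produces the "not defined" error: it is a static method of CvSVM, not a free function.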
Do I need to train the data while I am predicting, or is there some functionality like this?
The search for parameters needs to be done during the learning step, before predicting.
Can you explain what you mean by "get vector support". Are you referring to something specific?
Yes, I read it in the OpenCV documentation for SVM, like this: int c = SVM.get_support_vector_count();
Oh, now I understand. This is used to get the number of support vectors of the SVM. You need this to perform SVM computations. Hope that helps.
This is what I am asking: what do you mean by SVM computation?
---
- Package: DL (48 pins)
- Temp: 0°C to 70°C
- High-Speed, Low-Skew 1-to-18 Clock Buffer for Synchronous DRAM (SDRAM) Clock Buffering Applications
- Output Skew, tsk(o), Less Than 250 ps
- Pulse Skew, tsk(p), Less Than 500 ps
- Supports up to Four Unbuffered SDRAM Dual Inline Memory Modules (DIMMs)
- I2C Serial Interface Provides Individual Enable Control for Each Output
- Operates at 3.3 V
- Distributed VCC and Ground Pins Reduce Switching Noise
- 100-MHz Operation
- ESD Protection Exceeds 2000 V Per MIL-STD-883, Method 3015
- Packaged in 48-Pin Shrink Small Outline (DL) Package
Texas Instruments CDC318ADL
The CDC318A is a high-performance clock buffer designed to distribute high-speed clocks in PC applications. This device distributes one input (A) to 18 outputs (Y) with minimum skew for clock distribution. The CDC318A operates from a 3.3-V power supply. It is characterized for operation from 0°C to 70°C.
This device has been designed with consideration for optimized EMI performance. Depending on the application layout, damping resistors in series with the clock outputs (as proposed in the PC100 specification) may not be needed in most cases.
The device provides a standard-mode (100 kbit/s) I2C serial interface for device control. The implementation is as a slave/receiver. The device address is specified in the I2C device address table. Both of the I2C inputs (SDATA and SCLOCK) are 5-V tolerant and provide integrated pullup resistors (typically 140 kΩ).
Three 8-bit I2C registers provide individual enable control for each of the outputs. All outputs default to enabled at powerup. Each output can be placed in a disabled mode with a low-level output when a low-level control bit is written to the control register. The registers are write only and must be accessed in sequential order (i.e., random access of the registers is not supported).
The CDC318A provides 3-state outputs for testing and debugging purposes. The outputs can be placed in a high-impedance state via the output-enable (OE) input. When OE is high, all outputs are in the operational state. When OE is low, the outputs are placed in a high-impedance state. OE provides an integrated pullup resistor.
|
OPCFW_CODE
|
The Bayard Rustin LGBT Coalition (BRC) formed in 2006 to honor Bayard Rustin, the heart and soul of the United States Civil Rights Movement. Bayard Rustin was Martin Luther King Jr.'s mentor, chief organizer, pioneer of the movement's nonviolent resistance, and the man behind the 1963 March on Washington for Jobs and Freedom, during which Dr. King delivered his iconic "I Have a Dream" speech. In the spirit of Bayard Rustin, the BRC embraces advocacy and education to effect social change through the honoring of a political dynasty that is visionary, proactive and rooted in the Black LGBT experience. Join us!
Comes with Python, wxPython and BioPython.
I, Librarian - a web application for building a searchable, annotated library of PDF research papers; it is useful for individual researchers as well as for groups such as labs or departments.
In the field of sequence alignment, Needleman-Wunsch is used for finding the optimal global alignment between two sequences. The Needleman-Wunsch algorithm is a dynamic programming algorithm that operates on a matrix: sequence two is placed across the top of the matrix and sequence one down the side.
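The paragraph above describes Needleman-Wunsch global alignment; a minimal scorer, with illustrative scoring values (match +1, mismatch -1, gap -1, not taken from the original text), might look like:

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of strings a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # align a[:i] against all gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # align b[:j] against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]
```

A traceback over the same matrix would recover the alignment itself; only the score is computed here.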
|
OPCFW_CODE
|
Archiving and Retention Part 3: Archiving and Retention in Exchange
6 hours 59 minutes
thank you for joining me once again for the Microsoft 365 Security Administration course.
It is my esteemed privilege to be your instructor.
My name is Jim Daniels.
We're on module five of this course,
information governance and compliance,
lesson one, archiving and retention,
archiving and retention in Exchange.
In this lesson, we will cover enabling and disabling in-place archiving,
how to create and assign retention tags and policies,
and exporting and importing retention tags and policies
in the Microsoft 365 compliance center.
We have information governance,
and we have archive policies.
If you enable it,
it will trigger the action tied to the default retention policy.
So don't make this mistake:
if you enable it, whatever your default retention policy is will happen. If your default policy is to delete all messages after one year, and you go in here to the archive and you enable it,
you enable this in-place archive policy.
All of a sudden your service desk is going to hate you,
because all of your users,
they go under the scope,
will have all of their messages older than one year deleted.
Look at your retention policy first.
First and foremost, it even gives you a warning.
The default is two years;
that's the default retention policy.
So if you enable this
by default, and don't set anything else up
in your organization,
then all users that go under this scope, if it's scoped down,
will have every message older than two years
automatically moved to their in-place archive.
Don't make that mistake.
Communicate with your users and your service desk.
Retention tags and associated actions.
Tags are used to apply retention settings to messages and folders.
Tags are grouped together in a retention policy,
and policies can be applied to the mailbox.
We talked earlier about default policy tags. They act on the entire mailbox.
Retention policy tags are for default folders.
Personal tags are explicitly assigned by the users.
Some of the actions you can have in these tags:
delete and allow recovery.
This action allows the user to recover deleted items until the deleted item retention period is reached
for the mailbox database or the user.
Permanently delete: this is a purge.
If the content of a mailbox
is targeted by a retention policy
but it is also part of a hold,
the content will not be deleted permanently.
It can still be returned by a discovery search.
Move to archive. This action moves the item to the user's archive mailbox, if one exists.
If it does not exist, no action is taken.
This action is only available for default retention tags that are automatically applied to the entire mailbox,
and for tags applied by the users to items or folders explicitly.
To create a retention tag
the old way, we go to the Exchange admin center, compliance management, retention tags.
We give it a name.
We apply it to a default folder.
You have your action, retention period, comments.
When we create that, it updates the settings, and then there is a tag that can be included in a policy.
To create a policy:
compliance management, retention policies, hit the plus,
new retention policy, give it a name, and then in that policy you get to
assign the tags that are part of that policy.
In this example, we have four or five tags assigned within that policy.
Assigning the policies to mailboxes:
this has to be assigned.
You can use either the Exchange admin center or PowerShell.
In the Exchange admin center, pull up the user, go to retention policy, hit the drop-down and choose the one we want to assign.
In PowerShell:
Get-Mailbox
with the identity, and we can select the retention policy to see what policies apply.
To apply it, we do Set-Mailbox -RetentionPolicy and the name of the policy.
To apply them to all mailboxes in an organization: Get-Mailbox
-ResultSize Unlimited, so it returns every mailbox, not just the default result limit,
then Set-Mailbox after the pipe,
-RetentionPolicy and the name of your policy.
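Pulling those spoken commands together, the PowerShell looks roughly like this (the mailbox identity and policy name are placeholders, and this needs an Exchange session to run):

```powershell
# See which retention policy a mailbox currently has
Get-Mailbox -Identity "jdaniels" | Select-Object RetentionPolicy

# Assign a retention policy to a single mailbox
Set-Mailbox -Identity "jdaniels" -RetentionPolicy "Corporate Retention Policy"

# Assign it to every mailbox in the organization;
# -ResultSize Unlimited returns all mailboxes, not just the default limit
Get-Mailbox -ResultSize Unlimited | Set-Mailbox -RetentionPolicy "Corporate Retention Policy"
```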
Now, instead of creating the retention policies specifically in Exchange, you can actually go into the compliance center
and create one that is unified for different areas within your organization.
However, since this lesson is specifically about Exchange,
we went with the
older method that affects only Exchange.
Troubleshooting some of these retention policies:
if the policies don't run or users are confused, these are some of the things that we as administrators can do to help troubleshoot.
We can assess the licensing requirements
for different sets of users,
so what kind of licensing they have. Maybe it's a licensing issue.
Determine which Outlook client versions are in use.
Older versions of Outlook may not even have retention tags.
They don't even allow retention; they don't allow tags to be set.
Test retention policies by applying a policy to a single mailbox
to determine if the policy is working.
We can create and test new retention tags,
create and test new retention policies,
confirm the tags have actually been added to the policies,
and during tests we can confirm that the mailboxes have been placed on hold.
If a mailbox is on hold,
remember, it has a different
set of actions.
Even if something is
deleted, it won't really delete it, because it's on hold.
During a migration, or in an Exchange hybrid deployment,
you may need to apply the same retention policies,
because some mailboxes are on-premises and some are in Exchange Online.
During the Hybrid Configuration Wizard, retention tags and policies can be copied from on-premises to Exchange Online
with the organization configuration transfer option.
So if you ever get asked at random pub trivia,
or on a Microsoft exam,
or that kind of thing:
you can actually copy on-premises retention tags and policies into your hybrid environment.
True or false: Set-RetentionPolicy is the PowerShell command that assigns a retention policy to a mailbox, Set-RetentionPolicy.
This is where knowing your verbs matters.
Remember, that's the key to a lot of these exams:
know your PowerShell verbs and then the command that goes with the verb.
True or false?
The answer is false.
It's Set-Mailbox, the name of the mailbox, then the RetentionPolicy switch,
then the policy name.
So Set-RetentionPolicy, that's not the command for this. It's Set-Mailbox, then the -RetentionPolicy switch.
That's how you do it.
To recap today's lesson: delete and allow recovery, permanently delete,
and move to archive are the three actions associated with retention tags.
Retention policies contain retention tags
and must be assigned to the mailbox.
The organization configuration transfer option,
during hybrid setup, copies on-premises retention tags and policies to the Exchange Online environment.
Thank you for joining me on this lesson. I will see you for the next one. Take care
|
OPCFW_CODE
|
Publisher : The Korean Society of Agricultural Engineers
DOI : 10.5389/KSAE.2016.58.4.009
Title & Authors
Evaluation of SWAT Model Applicability for Runoff Estimation in Nam River Dam Watershed Kim, Dong-Hyeon; Kim, Sang-Min;
The objective of this study was to evaluate the applicability of the SWAT (Soil and Water Assessment Tool) model for runoff estimation in the Nam river dam watershed. Input data for the SWAT model were established using spatial data (land use, soil, digital elevation map) and weather data. The SWAT model was calibrated and validated using observed runoff data from 2003 to 2014 for three stations (Sancheong, Shinan, Changchon) within the study watershed. The R² (determination coefficient), RMSE (root mean square error), NSE (Nash-Sutcliffe efficiency coefficient), and RMAE (relative mean absolute error) were used to evaluate the model performance. Parameters for runoff calibration were selected based on the user's manual and references, and a trial-and-error method was applied for parameter calibration. Calibration results showed that annual mean runoff was within error bounds of the observed values. For daily runoff, R² ranged from 0.64 to 0.75, RMSE from 2.51 to 4.97 mm/day, NSE from 0.48 to 0.65, and RMAE from 0.34 to 0.63 mm/day, respectively. The runoff comparison for the three stations showed that annual runoff was higher at Changchon, especially in the summer and winter seasons. The flow exceedance graph showed that Sancheong and Shinan stations were similar, while Changchon was higher over the entire range.
SWAT model;Nam river dam watershed;runoff;hydrology statistics;
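For reference, two of the statistics quoted above can be computed as follows (a generic sketch following the standard formulas, not the authors' code; note that RMAE definitions vary between papers, so it is omitted here):

```python
import math

def rmse(obs, sim):
    """Root Mean Square Error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - num / den
```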
|
OPCFW_CODE
|
[Gluster-infra] Configuration management, part 2
mscherer at redhat.com
Tue Aug 26 12:56:45 UTC 2014
so as the feedback was not negative for my proposal, I would like to go
forward and propose something more concrete to discuss.
So since there was not much feedback on the tool, and since Justin
said "no ruby if possible", I would propose to go with Ansible.
It is simple enough to deploy (no agent), which makes it suitable for
the variety of platforms we have in the infra, and by combining the tasks
of configuration management and server orchestration, it permits having
one system instead of two to deploy and learn.
It is quite simple to understand (more than cfengine IMHO), and it
is not Ruby (so Justin will be happy). I am quite familiar with the
tool, and while I tried hard not to simply pick "what I prefer", no one
expressed any preference anyway :)
it is also used more and more inside RH, and on Fedora.
Ansible uses root ssh to manage servers, so that requires we have some
authentication set up for it. Since that's the easiest, I guess we can
just set a key for the access, and make sure the key is protected
"enough" (I would be in favor of rotating it quite often, restricting it
by IP, and would use a TPM if we could, but there is no way to do that
with our rackspace infra for the foreseeable future).
This also means that access to the server would be restricted and
that we want to audit as much as we can.
So I propose :
- have a git repository, and an automated deployment/run of ansible on
commit + cron (for making sure servers stay compliant). This would run
from a special user on the bastion, who has access to the right key. It
would be nice to have that user as restricted as possible
- have proper security on the bastion. IE, firewall, selinux as
enforcing, potentially even user confined.
- have a strict policy on who can access it, and making sure this
requires strong authentication. We can either do it with regular 2
factor auth, or just password + ssh key, or sudo + ssh keys for access.
- make sure the server does nothing more than this
There are lots of other ways to set up ansible, but I do not think they
would be suitable.
For example, we could have the ansible run not be automated, but
require a password. This would solve my concern regarding ssh keys, but
I see that as tedious, and it would likely bring more roadblocks than it
solves.
We could also use ansible-pull, which takes the git repository on each
server and applies it locally. This also solves the problem of ssh key
protection, but it would cause issues since it only solves half
of the problem (configuration management, not one-off task
administration), and it would mean that every server has access to the
passwords Ansible has access to.
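As a sketch of what the commit + cron run could apply, a minimal playbook might look like this (the host group, module choices, and the ntp example are invented for illustration, not part of the proposal):

```yaml
# site.yml -- applied automatically on git commit and again from cron,
# so servers drift back to the committed state between runs.
- hosts: all
  remote_user: root        # ansible connects over root ssh, as described above
  tasks:
    - name: ensure ntp is installed
      yum:
        name: ntp
        state: present
    - name: ensure ntpd is running and enabled
      service:
        name: ntpd
        state: started
        enabled: yes
```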
Any comments ?
Open Source and Standards, Sysadmin
|
OPCFW_CODE
|
Let Me Game in Peace – Chapter 1329 – Invincible Dodge
The outcome made Zhou Wen even happier. After a second round of the seventh bullet, the Calamity-grade bullets still failed to hit the blood-colored avatar protected by the Heavenly Robe. The second round's seventh sure-kill bullet was just like the first round's sure-kill bullet: it constantly circled the blood-colored avatar's body, but it failed to injure the blood-colored avatar.
However, this did not prove anything. He had previously dodged the first two shots and was still killed in the end.
I dodged it… Zhou Wen saw the Heavenly Robe move by itself. It yanked the blood-colored avatar's body to the side, dodging the Calamity-grade bullet.
Is this the strength of the Heavenly Robe augmented by the Invincible Lucky Star? Zhou Wen was amazed.
Since Wang Chan was willing to stay, Zhou Wen didn't say anything else. He stayed on the Moon for another day. Firstly, he wished to see how Lady Supreme Yin planned on transforming Wang Chan. Secondly, he wanted to see if the Invincible Lucky Star was useful.
The crucial seventh shot is finally here… Can I dodge it? Zhou Wen was extremely nervous.
This left Zhou Wen somewhat frustrated. The sure-kill bullets kept circling him, seeking his life. A single mistake could cause an irreversible outcome. This was a heavy burden. No one knew if the bullets would penetrate the blood-colored avatar's body the very next second.
However, every time a bullet tried to approach the blood-colored avatar, the Heavenly Robe would use its powers to yank Zhou Wen's body to the side. Since the bullet couldn't hit him, it could only chase after the blood-colored avatar. As a result, it looked like a halo.
“Lady Goddess, according to what you said, wouldn't Wang Chan, who has the Body of Bane, be extremely powerful? Then can she use the Body of Bane to advance to the Calamity or Apocalypse level in the future?” Zhou Wen asked on Wang Chan's behalf.
I'm finally going to put an end to this! Zhou Wen wasn't very lucky for some reason. After traveling for nearly 72 hours, he returned to Luoyang.
Zhou Wen didn't leave as he stood there experimenting. He wanted to know how many times he could dodge the shots with the help of the Invincible Lucky Star and other lucky equipment. He wanted to know how likely it was to achieve such a result.
“Let's put it this way. The attributes of Bane, Hope, Luck, and Lust are very valuable and special. With such abilities, you would naturally be much stronger than ordinary people and have greater potential. However, how far one can go depends on one's cultivation,” explained Lady Supreme Yin.
Midway through her sentence, Lady Supreme Yin glanced at Zhou Wen and suddenly paused. Then, she continued, “Generally speaking, it is improbable for humans to advance to the Mythical stage unless they obtain external help.”
Zhou Wen experimented repeatedly, and the results left him pleasantly surprised. He dodged the set of six bullets consecutively. So far, not a single one of them could injure the blood-colored avatar. Instead, the blood-colored avatar seemed to have six more halos around it.
The Calamity-grade gunshots sounded again, but they still failed to touch Zhou Wen. The bullets were dodged by the Heavenly Robe.
Once again, he saw the Metal Guards and the Gold Battle Gods. Just like before, Zhou Wen didn't move at all. His Heavenly Robe danced as he dodged all of the bullets.
“It's impossible to fully eliminate the effects. However, it's not difficult to control them somewhat and not cause too much trouble,” said Lady Supreme Yin confidently.
Unexpectedly, Lady Supreme Yin rolled her eyes at him and said, “Are you dreaming? I only said that the attribute of Bane is very valuable; I didn't say that one would definitely be very strong if they had a Bane body. Let alone the Calamity grade, you humans can't even reach the Mythical…”
“I wish to stay.” Wang Chan was very opinionated.
Unfortunately, there was no such thing as a reward for reaching the Golden Palace in-game. It was useless even if Zhou Wen showed up there.
|
OPCFW_CODE
|
Table of Contents
- 1 How do you communicate between an app and a website?
- 2 How do I integrate Android apps into my website?
- 3 How does Android app communicate with server?
- 4 How do I connect my Android to the Internet?
- 5 Do mobile apps use rest APIs?
- 6 How do mobile apps connect to backend?
- 7 How do two applications communicate?
- 8 Can Android Apps communicate with each other?
- 9 How do you communicate one Microservice from another?
- 10 Why Docker is needed for Microservices?
- 11 Is Kubernetes only for Microservices?
- 12 What is the difference between Docker and Microservices?
- 13 How do I run a Microservice from the command line?
- 14 What problems does containerization solve?
How do you communicate between an app and a website?
If you want to manage communication between a website and an Android app, you need a centralized server which holds the data for your web/mobile applications. Then, by using web service methods (REST/SOAP), you can achieve communication between them.
How do I integrate Android apps into my website?
But when it comes to sharing data between an app and a website, you will need to write backend software (which is not so easy), for example in PHP, and then create a database to store the data (with MySQL, for example). Then you can access your data from either the app or the website and decide what to do with it.
How do I communicate between two apps on Android?
Android inter-process communication At the simplest level, there are two different ways for apps to interact on Android: via intents, passing data from one application to another; and through services, where one application provides functionality for others to use.
How does Android app communicate with server?
Intro Guide to Android Client-Server Communication
- The client makes a request using a HTTP POST to a server.
- The PHP script queries the MYSQL server.
- The PHP script gets the SQL data.
- The PHP script puts the data into an array and assigns keys for the values.
- The app parses the JSON and displays the data.
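The data-shaping ends of that flow can be sketched in Python (field names are illustrative; the HTTP transport and the PHP/MySQL pieces are omitted):

```python
import json

# Server side (sketch): turn database rows into the keyed JSON the app expects.
def rows_to_json(rows):
    """rows: list of (id, name) tuples, as returned by the SQL query."""
    payload = [{"id": r[0], "name": r[1]} for r in rows]
    return json.dumps({"results": payload})

# Client side (sketch): the app parses the JSON and pulls out the values.
def parse_response(body):
    return [item["name"] for item in json.loads(body)["results"]]
```

Keying each value (step 4 in the list above) is what lets the client look fields up by name instead of by position.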
How do I connect my Android to the Internet?
To connect to the network, tap Connect. To change Wi-Fi settings, tap All Networks.
Turn on & connect:
- Swipe down from the top of the screen.
- Touch and hold Wi-Fi .
- Turn on Use Wi-Fi.
- Tap a listed network. Networks that require a password have a Lock .
How do mobile apps communicate?
Mobile app companies can communicate with their customers using email, in-app notices, or push notifications. Though push notifications may seem like the most convenient option, respondents said they actually prefer email notifications to the other forms.
Do mobile apps use rest APIs?
While RESTful APIs generally communicate using HTTP, for mobile apps HTTPS, i.e. HyperText Transfer Protocol Secure, comes into the picture. It's a form of HTTP, but more secure, because it uses a secure socket layer (SSL).
How do mobile apps connect to backend?
The steps you need to remember while building a backend for mobile app:
- Write down the backend and frontend responsibilities.
- Decide on process endpoints and get them working.
- Design the API and write it down.
- Design the database.
- Get the backend test script ready.
- Use Programming language to implement the API.
Do Apps communicate with each other?
Apps on your phone may be secretly talking to each other and potentially breaching your security, researchers warn. A study showed that applications on Android phones are able to talk to one another and trade information.
How do two applications communicate?
In a client/server environment, applications must communicate across different platforms. Performs the connections from client to server and server to server, and sends messages for distributed requests. It is a peer-to-peer, message-based, socket-based, multi-process communication middleware solution.
Can Android Apps communicate with each other?
Once downloaded, apps can communicate with each other without notifying the user. But the team found that some apps exploit this feature to gain access to data they shouldn’t be able to.
How do APIs communicate with each other?
APIs communicate through a set of rules that define how computers, applications or machines can talk to each other. The API acts as a middleman between any two machines that want to connect with each other for a specified task.
How do you communicate one Microservice from another?
There are two basic messaging patterns that microservices can use to communicate with other microservices.
- Synchronous communication. In this pattern, a service calls an API that another service exposes, using a protocol such as HTTP or gRPC.
- Asynchronous message passing.
Why Docker is needed for Microservices?
Docker is an open platform for developing, shipping, and running applications. Using Docker, it is easy to create required services separately and manage them as microservices without affecting other services. This is one biggest advancements in the software industry where we used to have big, monolithic code.
Why is Docker good for Microservices?
Docker allows you containerize your microservices and simplify the delivery and management of those microservices. Containerization provides individual microservices with their own isolated workload environments, making them independently deployable and scalable.
Is Kubernetes only for Microservices?
“Kubernetes: Not only microservices, but also high performance workloads” Ma explains how Kubernetes has become the dominant container orchestrator, what his favorite K8s feature is, and where it might go in the future.
What is the difference between Docker and Microservices?
We will understand the difference between Docker and Microservices by an analogy. Docker is a Cup or in other words Container whereas Microservice is the liquid that you pour into it. You can pour different types of liquids in the same cup. Similarly, you can run many Microservices in same Docker container.
How do I run Microservices in Docker?
Test the Microservice
- Use Docker Compose to build all of the images and start the microservice: cd flask-microservice/ && docker-compose up.
- Open a new terminal window and make a request to the example application: curl localhost.
How do I run a Microservice from the command line?
- Generate the code. If you haven’t already done so, then you’ll need to follow the instructions to install the IBM Cloud Developer Tools CLI, if you want to try things out for yourself.
- Run the microservice. The CLI is then used to build and run the microservice locally.
- Deploy to IBM Cloud.
What problems does containerization solve?
Docker solves problems like missing or incorrect application dependencies such as libraries, interpreters, code/binaries, and users. Example: running a Python or Java application with the right interpreter/VM, or a 'legacy' third-party application that relies on an old glibc.
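As an illustration of that dependency packaging, a minimal Dockerfile pins the interpreter version and installs libraries inside the image rather than on the host (file names and base image are illustrative):

```dockerfile
# The base image pins the exact interpreter version the app was tested with.
FROM python:3.11-slim

WORKDIR /app

# Dependencies are installed inside the image, so the host needs nothing.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```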
|
OPCFW_CODE
|
How do you feed 'str' data to a decision tree without one-hot encoding?
I am working on linguistic data to train a classifier (decision tree). The data is in CSV format, tab separated, and it has 62,000 rows and 11 columns.
Data Sample:
target_lemma target_pos left_word left_word_pos right_word right_word_pos parrent_word parrent_word_pos arg_word arg_word_pos label
form VBZ %% %% forms VBZ forms VBZ forms VBZ N
form VBZ provINce NN %% %% forms VBZ forms VBZ N
form VBZ The DT %% %% forms VBZ forms VBZ provINce NN N
In this data, null values are replaced by %%.
The first 10 values are features.
The last value is the label, which is either N or Y.
The decision tree gives an error because it expects the features to be int or float values. To resolve this issue I encoded the data with a one-hot encoder, and it works fine on the data split 80/20.
The real problem occurs when I give it a user input without the label. I convert the input into one-hot encoded data and pass it to the predictor.
It gives me a ValueError saying the number of features does not match: n_features is 11823 and input_features is 10.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.feature_extraction import FeatureHasher
h = FeatureHasher(input_type='string')
balance_data = pd.read_csv('identifier-tab.csv',
delimiter='\t',
encoding="ISO-8859-1")
# Splitting Dataset
Y = balance_data.label
X = balance_data.drop(columns='label')
X = pd.get_dummies(X)
Y = pd.get_dummies(Y)
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=100)
print(X_test)
# Gini
clf_gini = DecisionTreeClassifier(criterion="gini", random_state=100, max_depth=9, min_samples_leaf=9)
clf_gini.fit(X_train, y_train)
y_pred = clf_gini.predict(X_test)
print("Gini Predicted values:")
print(y_pred)
print("Gini Accuracy: ", accuracy_score(y_test, y_pred) * 100)
# Entropy
clf_entropy = DecisionTreeClassifier(criterion="entropy", random_state=100, max_depth=3, min_samples_leaf=5)
clf_entropy.fit(X_train, y_train)
y_pred = clf_entropy.predict(X_test)
print("Entropy Predicted values:")
print(y_pred)
print("Entropy Accuracy: ", accuracy_score(y_test, y_pred) * 100)
# User Test (DOES NOT WORK)
xx = "present JJ peculiar JJ %% %% written VBN character NN"
x = xx.split("\t")
data = pd.Series(x)
print(x)
print(data)
data = pd.get_dummies(data)
print(data)
user = clf_gini.predict(data)
Any suggestions or code help would be great!
Try using random forest so each estimator only takes into account a small subset of features
Do you actually use the FeatureHasher you defined? Also, I am not sure why you use lower-case x and y in the train_test_split call when you defined them before with capital letters.
Regarding the problem with your user input: you apply the one-hot encoding only to your given data, which produces an extra feature for each unique value in your categorical features. Your user data contains only one unique value per categorical feature, and applying a separate one-hot encoding to it produces far fewer features. So you should call get_dummies() on the combined data to make sure the encodings match.
However, I don't think one-hot encoding is a good choice here, since your categorical features seem to contain many unique values, resulting in a very large number of features (11823). So you might think about using an OrdinalEncoder, e.g. from scikit-learn.
When you don't want to, or cannot, combine the user input with your known data, think about adding an extra encoding for "unknown" values.
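As a sketch of that idea (the column names and sample values below are invented for illustration, not taken from the question's full data): scikit-learn's OrdinalEncoder can be fitted once on the training data and then reused for user input, mapping unseen values to a sentinel so the feature count always matches.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Tiny illustrative training set with two categorical feature columns.
train = pd.DataFrame({
    "left_word":  ["%%", "provINce", "The"],
    "right_word": ["forms", "%%", "%%"],
})
labels = ["N", "N", "Y"]

# Categories unseen during fit are mapped to -1 instead of raising an error.
enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
X_train = enc.fit_transform(train)

clf = DecisionTreeClassifier(random_state=100)
clf.fit(X_train, labels)

# User input goes through the SAME fitted encoder, never a fresh one,
# so it ends up with exactly as many features as the training data.
user = pd.DataFrame({"left_word": ["peculiar"], "right_word": ["written"]})
X_user = enc.transform(user)
print(X_user.shape[1] == X_train.shape[1])  # feature counts now match
print(clf.predict(X_user))
```

The key point is that the encoder is fitted exactly once; get_dummies() has no such fit/transform split, which is why encoding the user input separately produced a mismatched feature count.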
I tried to use the FeatureHasher but it didn't work for me. I guess it works on dictionary data. Can you please provide a code snippet?
Your problem is that you encode your user data separately from your train/test data. This causes the number of features to differ, so your user data cannot be classified. You can find code snippets on scikit-learn's site for how to use the encoders.
Next time, please also remove code parts which are not used.
7 Tools for Cross-Platform Mobile Development. 3 Trends Shaping Mobile Development in 2016: Last year saw the emergence of forces promising a tectonic shake-up of the mobile app development industry in 2016.
The original version can be found here. When building for mobile devices you can take one of three approaches: Web apps, Hybrid Apps or Native apps. Web Apps are websites designed to fit in mobile screens and are accessed by typing a URL in the phone’s browser. There are multiple UI frameworks that can make your Web app look like a native app, or you can build it using media queries to make it responsive to the web browser’s dimensions.
Hybrid Apps are Web apps that are packaged in a custom full-screen browser to resemble a native mobile app, with extensions that provide access to some hardware features, but the user interface is still written in HTML/CSS and rendered by a web browser. Native Apps are written using different languages depending on the platform. From 10,000 feet, PhoneGap and Titanium appear to be similar, but that's really where the similarities end.
The future of mobile app development - The Official Microsoft Blog. It is incredible how much has happened since Xamarin joined Microsoft just over a month ago, starting with Scott Guthrie's Build 2016 announcements that Xamarin is now part of all editions of Visual Studio at no additional charge, from Community to Enterprise, and our plans to open source the Xamarin SDK.
It is a dream come true for us to be able to put the power of Xamarin into the hands of all developers. In just the first two weeks since Build alone, we helped nearly 3.5 times more developers get started building great apps with Xamarin than ever in our history as a company. Now we are at Xamarin Evolve 2016, the world’s largest cross-platform mobile development conference, in Orlando. This morning we open sourced the Xamarin SDK and launched new ways to make Visual Studio the most complete mobile development environment. We also launched new ways to build native, cross-platform apps faster than ever using our popular cross-platform UI framework, Xamarin.Forms.
Learn more. PhoneGap vs Titanium: A Guide for Enterprises (Mobile App, 27 Aug. 2014). Business startups as well as enterprises aim to build high-performing, low-cost and time-efficient mobile apps.
Today, cross-platform development plays a big role in delivering a consistent and good user experience across multiple mobile platforms like Android, iOS, Windows and others. To enable cross-platform app development, there are essentially two strategies:
- Use Adobe PhoneGap/Cordova to create mobile apps with an embedded browser control, writing a web app wrapped as a native app
- Use Appcelerator's Titanium as a cross-platform tool to create native apps
Both are open-source frameworks that help enterprise mobile app developers easily create apps on multiple platforms, as developing a separate native application for each platform is an expensive affair. PhoneGap vs Titanium Conclusion. Mobile Frameworks To Develop Multi-Platform Apps - 26 Items.
Linux software suggestions
ambassador at fourthworld.com
Sun Sep 20 20:50:02 EDT 2015
> I'm about to turn a Dell 5150 which is sitting collecting dust into my very
> first Linux based machine!
> This will be an open source software only machine.
> I've always been Windows based so have decided to go for Mint with Cinnamon
> distro as it looks like it will be easier for me to transition to.
> Apart from LiveCode Community what others Open source software would those
> of you who are Linux based recommend?
I use Ubuntu, initially because it's what my customers were using when
they were asking for a Linux version of one of my apps. Over time I've
come to appreciate that it's the most popular desktop distro, so as a
developer I find that comforting. But over time I've met many of the
people who make it, so using it feels like something made by friends,
like having a neighbor bring over a loaf of fresh-baked bread.
But that's the beauty of Linux: it's all made by friends, people who
are for the most part easily reachable, and by the nature of their work
predisposed to sharing. And the work is done within project structures
where you can lend a hand if you're so inclined, in just about any way
that matches your skill set, not just code but also design, docs, and
more - just as we're beginning to do in the LiveCode community.
Mint is also a great distro, and Cinnamon gets consistently good
reviews. Hard to go wrong there. That's another great thing about
Linux: so many different flavors, with so many different options for
setting it up, that everyone gets exactly what they want.
> E.g. best email client, office suite, ftp client, graphics prog, browser,
For email I switched to Thunderbird a decade ago, back when my work was
done almost exclusively on Mac. It's available for Windows and Linux
too, and uses the same standards-based mbox format on all three
platforms so you can move your email from OS to OS easily if you need to.
Office suite: LibreOffice, hands down. It's a fork of Open Office
(after Ellison bought Sun and started creeping people out with his
FOSS management), and today it has far more contributors than Open Office.
LibreOffice is a great package, pre-installed with Ubuntu and probably
with Mint as well. And you're in good company: the most recent
large-scale convert to LibreOffice is the Italian Ministry of Defense,
who just moved 150,000 desktops from Microsoft Office to LibreOffice:
FTP: FileZilla. Annoying UI in some respects, but also configurable to
become much more useful and cleaner than its default layout.
Graphics: GIMP is a truly great tool, more than capable of handling the
needs of probably 90% of Photoshop users if only they'd earnestly give
it a try.
A relative newcomer to Linux graphics is Krita - gorgeous UI, probably
closer to Painter in its focus than to Photoshop, well worth exploring.
For vector graphics try Inkscape. I've met the lead dev at the SoCal
Linux Expo, a hard-working yet humble man who's put some wonderful
capabilities into the package, with a strong following keeping it
growing nicely. Like GIMP it's also available for OS X and Windows, so
you can use one format on all platforms.
Browsers: Only IE and Safari are platform-specific. Chrome, Firefox,
Opera, Dolphin and others are multi-platform. Use whatever you enjoy.
I split my time between Chrome and Firefox myself.
Text Editing: Lately I've gone back to Geany, but my needs are modest
enough that I'm considering pulling a half-baked text editor I started
in LiveCode out of the archives to see if I can find time to flesh that
out into a usable state as well (it'd be nice to have one editor for LC
in a simple package that works exactly as I want it to). But there are many
available, and no matter which GUI one you use there's good reason to
explore at least Nano for editing files on remote servers, or learning
vim or emacs if you have time. But don't be ashamed of using the humble
Nano, it's a decent command-line editor with a close-to-zero learning curve.
> Also how is LiveCode doing with 64bit Linux, any problems or parity issues?
Yes, 64-bit for all the reasons others have noted here.
Please keep us posted on how your Linux explorations go. Part of the
reason I got started with Linux was to shake the cobwebs out of my head
after spending too many decades with just one OS, a chance to think
really different. I hope you find your Linux adventure as rewarding as I have.
Fourth World Systems
Software Design and Development for the Desktop, Mobile, and the Web
Ambassador at FourthWorld.com http://www.FourthWorld.com
More information about the use-livecode
Create an image conversion system
Allow a user to be able to convert image types, such as PNG to JPG, JPEG to PNG, etc.
Hi, I'd like to try this
@foobar41 Hey! You're welcome to try it. If you create your own fork for the repository, you can start making changes. If you look at the How to Contribute section of the README, it will explain the steps for how to add a new feature to the site by cloning the repo, making a new folder, adding the link to home etc. If you have any problems or questions let me know, I'm happy to help!
What framework are you using for the server-side? Converting format in client-side using javascript seems to give very few options
Hey! Currently no server-side code is being used, as the site is being hosted with GitHub pages which does not support server-based applications.
Thanks for letting me know, I will try a different method then
If you can’t figure out a way without using server side such as node, maybe try something else or put it off.
If the project gets enough traction, I will consider paying for a VPS to host the site on so more content can be added to the site. If this happens, we can use node to make adding stuff like this easier.
All right, I will let you know soon
@foobar41, the site is now hosted on Heroku and uses Node.js, so if you would like to take another crack at the image conversion system, I have re-opened this issue.
Sure, I'll give it a try
@foobar41 yep that sounds fine. Maybe add a directory in the /public folder called /uploads or something like that where the images can be stored. If possible, try adding a timer that will run and delete the image after 15 minutes so it doesn't clog up storage and for user security.
Great work! Let me know if you need any more help.
@THHamiltonSmith shouldn't the converted file be stored in something like a DynamoDB storage, where it will be purged after a given timeout?
We can use node to delete files with a timer, btw I don't know how to use DynamoDB storage 😅. What I am thinking is deleting current files in /uploads and then storing the converted file in the directory, that way there will always be a single file in that directory without taking up much space and this file will be deleted in the next operation. What do you think of this? Should I use a timer to delete files? Is there a better way to do this?
That will be good for now. If we encounter problems once the site gets more traffic we can switch up how we do it to be more efficient. You could also add a timer to delete the file automatically after for example an hour if you like.
We're paying $1120/year for 14 active users. It's a lot but we definitely rely on Slack. This is the one SaaS app that I feel like we're getting true value out of.
For the "Standard Plan" - $1,404.80 total annual ($1,280 for 16 users and $124.80 sales tax).
Plan fluctuates monthly when people are added or removed. It does a good job of automatically doing this.
We are a large financial company. Our initial quote was $12/user/month, and we got them to drop it to $6.50/user/month.
Having over 50 people was never enough to get a discount, and their price for a chat tool is outrageous, so we are forbidding storing important information in Slack messages (a good practice whether you pay or not) and using the free version.
Our team of 25 people pays $200/month for the standard plan. You can save more if you go with an annual contract however we have stuck to the monthly plan at this time. However, some months we have more active users than other months, causing this price to fluctuate.
We work a lot with freelancers and couldn't justify the price to have them join so we decided to create larger channels with channel guests or multi channel guests.
On Business Plus, $208.70 per user (850 Users One Year). It's an unadvertised tier between business and enterprise that gives some additional features, including the eDiscovery API for compliance.
Enterprise Select level license, $240 per seat, 80 seats. Not a well known or advertised service level, specifically designed for regulated firms without major IT staff.
Our company of a little under 3,000 people was quoted $1 million per year (~$27 per person per month). We used their free plan until they were going public and needed the sale, at which point they lowered it to $10 per person per month.
Not much since Slack started offering discounts to Indian users. Our per user pricing works out to about $5 per user per month.
$3.20 per user per month, India price
We are paying $0 for 4 users. We are still in the free tier, which is great.
We're a small team and paying $10/mo per user for the first year
$15 a month each for 20 users.
We pay $115 a month for 8 users
220 employees, paying £63 yearly per user.
We initially were paying monthly, but this solution was more efficient
Our team of around 25 pays $8/month per user.
$31,295.64 for a one-year contract with 390 users.
Unable to negotiate a discount, but were able to be "reimbursed" ( = credit) for bot users that used "real" accounts.
We’re on the Complimentary Non-Profit standard plan. We’re a team of about 15.
We get search, unlimited apps, 15 person video call, and single/multi channel guests.
We pay $8 per month per user. Under 25 total users.
$6.67 per user per month. It's Slack, we would be useless without it.
$3 per user for the Plus service, 'friendly' price
We pay $247.73/month for 29 users, totaling about $7k/year.
We use the free tier which is sufficient for us.
I pay $8/month per user for Slack Standard plan, and we have 13 users currently
Our team of ~350 is paying $12.50/user/month on their Plus plan (list price). We've looked into Enterprise but they haven't made it easy or quick to get a quote.
Standard list price for Slack, paid annually it comes out to $80/yr/user.
We paid $200/mo for 8 seats and didn't negotiate. It is a great service.
We pay $135 per month. We feel it's a little bit expensive so we plan to move to another communication tool.
Currently we're using Slack for free.
We have negotiated a deal of 60% off on our total pricing on Slack.
We are paying list price for team of about 200. It's the plus plan which is $12.50 per user per month.
We are a team of 10 engineers. We explained our story of zero profit and no funding. They provided a discount of 30% off pricing.
Currently just one user on the free tier account.
I'm on the free plan right now because that's actually what fits my needs.
We pay for 8 users a total of $693.76 annually. We are on the Standard plan and are billed annually.
$400/yr for 5 active users, didn't negotiate. We get billed immediately when we add new users to the team, paying retail pricing on those.
Our team of 2 uses the free tier ($0) for Slack. We didn't need to use the paid version.
$26k/yr for Slack for ~150 users. Only discount was paying annually in advance, nothing on the basis of volume of seats.
We have 12 employees and found ourselves paying $150 a month for them due to Slack integrations and plugins.
We are paying $12/user per month for 60 people.
$8.00/month. I've worked with companies with higher headcount, though, that have had different pricing structures.
We pay a base price, they don't want to give us discount for 200 users.
The standard plan is fine.
34 people for 2550€ per year for the standard licence
We have had great success at setting up "SlackOps" with Slack's Standard $6.67 per person, per month plan with our 52 person team.
Team of couple of dozens, we're using it for free.
I am paying $0 for Slack - free tier. 2 users for now.
We are a small company, on the basic plan paying $8 per month per user.
It's not clear, but it looks like we're paying on a base of €7.50/user/month, for 25 users. We're on a monthly contract.
Team of 20 people, paying $300/mo.
Around $2/month per user 300 people, with India team price
$9 per active user per month, currently using three seats
Plus plan - $127.50 per user per year for 1500 users (15% discount)
Discounted because of volume of users.
We have 15 users on Slack.
We pay $1200/year for that.
The nice thing is if someone stops using it (leaves the company) they automatically reduce the licenses needed and credit that amount towards future usage.
We are using the free version of Slack.
We're paying $42.50 for 5 users.
I use the basic free plan.
Only basic package for 27 users @ $8/month. We pay $216 monthly.
We use the standard package, so $6.67 per person, which is identical to official website.
We pay around $10 per seat for Slack for our team.
Slack is an amazing product, especially if you don't ask for history search, it is pretty fine with the free plan. Our team is still on the free plan without bothering.
We have over 10,000 employees working at our company, but 70% of them are single-channel guests on Slack, so we only pay around €30k/mo for Slack Standard plan for around 4,500 users.
Our team is 5 people and we pay $15/person/month for the Plus plan. From my understanding this is the "standard pricing" and was wondering if anyone with a small team size managed to get some sort of deal with Slack.
Free. We only use it for light access to 2 connected apps and 8 team members.
Our team of just under 50 paid $392, $394 and $382 over the past 3 months.
We have the Standard Plan for 100 people - $809 per month
Free. On a shoestring budget overall, can't afford the premium plans.
$60 for 10 users. Small team, so nothing extra there.
Being charged $6.67 USD/month per person billed annually as per Slack quotation
Our team of 9 pays $60/month for Slack.
We are using Slack free. The limit is reached but we don't need to access to old files, because all our team use Dropbox, so Slack is used just for chat.
$7000 per year for a team of 90. This is a basic tier and we haven't negotiated anything.
For < 30 people we pay $80 / year / person. I don't know what happens at scale, but it is a non-cost essentially
Our team of 4 pays $56 a month for Slack premium.
Standard plan for 6 users is $51/mo. Touchless billing with a credit card. Not really a story to tell. :)
Was paying $6.67/month/team member with my last company, team of 10 with a few contractors.
I use Slack for my entire team of 12 but I just went through the pricing page, no discount hacks unfortunately.
Our team of 5 people pays $31/month for Slack. We didn't negotiate anything.
I don't have all the details, but my company of about 3,000 employees pays ~$450,000 annually for Slack.
Standard plan, 6.67 per month per user when paying annually.
$72 USD for 9 users + tax to get access to shared channels - this feature alone is worth it.
We upgraded to Slack's Plus Plan for SSO support.
We pay $12.50 (list) per user per month for 65 users.
It took one invoice before I figured out I was paying for guests as full members. If you can manage with externals just having access to one channel they'll be free. Otherwise, I've been paying $8 monthly per member for Standard Plan.
Looking for Google api search zip code using php Freelancers or Jobs?
Need help with Google api search zip code using php? Hire a freelancer today! Do you specialise in Google api search zip code using php? Use your Google api search zip code using php skills and start making money online today! Freelancer is the largest marketplace for jobs in the world. There are currently 17,764 jobs waiting for you to start work on!
Geocoding with PHP and Google API - Search by zip code nearest.
...longitude and latitude using Google Maps API.
I already have the site ready with the search page and results...script!
Initial Search Page
...asp.net/vb, and I want to perform a Google Place search using zip code and keyword for nearby places. I...phone number and maybe even plot a nice little Google map.
PHP/MySQL: Search using zip code/Integrate Google maps
I need a search page created that will search a MySQL database and get result with a specified distance...zipcode. All results should be shown on a google map. Search results will display different depending...value in the database.
I need a search page made. I will give successful bidder fields
...has the zipcode of that address.
Let's say, when an user wants to give his zipcode and search the nearest...nearest consultant, using that zipcode in google map, it should search and return the nearest consultants
Create a 'Where to Buy' zip-code-search application
...our website that allows a visitor to enter their zipcode and return locations where they can buy our products...Experience with Google Maps API is necessary.
Our site was built in Wordpress, so you'll need php/mysql experience
...forms that I will be collecting addresses with using Gravity Forms through Wordpress. The address is...functional search form that goes with the wordpress theme currently being used that allows for a zipcode search...the listings. I need to be able to search for listings via zipcode that are within a 5 mile, 10 mile
...solution using the Google Search Appliance. The webpage will look similar to this but written in PHP <http://www...program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in...Agreement).
Yelp API expert to build website by zip code search
We are looking for a Yelp api expert for wordpress, joomla or recommended cms.
We want a plugin...category. User can make a search based on location (street or city) or zip. It should retrieve all the...the listing form that search term(street or city or zip) with business name and  business address and
...have an issue with location search using zip code. When I search with a zip code, if a store is not in that location...location it does not show any result. I want the search with zip code to show a list within 100 miles. We only have
A website to List Businesses by Categories and search by City, Zip Code, and State
...following should be included:
Search field should have: City, ZipCode, State, and for them to type
...for: example, mortgage broker
Map location using Google API
Featured Listing - for client that I may...sites you have done
that has to do with listing/php/mysql and so on.
Also let me know if this will
I have a simple zipcode radius search that is working okay, but on some searches is not returning the...need a PHP/ MYSQL expert to take a quick look and fix it.
I am using this script:
ZIPCode and Distance
Integrate Google Local search results into site using API
...that's coded in HTML/PHP. We have a feature on the site to find local taxis using Google Local. Currently...link to Google Local's webpage, but we want to integrate the search results into the site using Google's...Google's API so we can style the results using CSS and put it within our design template. If you have a b...
Form Submission With Results Based on Zip Code using Google Map API
...on the customers zipcode, the form will return the nearest location to their zipcode. We have an excel...determine the distance, however we will need to use Google Map API to determine what location is the closest to
...to create a custom search engine using either Yahoo or Google API.
The search needs to operate like...like any other search engine, searching page content as well as PDF files etc, however, it will only return...return search results from sites defined by us.
The sites will be input into a mysql database and
...interfaces with Google Maps. I prefer it to work with the Google Maps API. I can get an api key at any time...is in php, and the following line will be passed from the 1st page to the page with my Google Maps flash...content="10;url=index2.php?email=<? echo $_REQUEST["from"] ?>&name=<? echo $_REQUEST["name"...
...have a zipcode radius search created for Mosets Tree.
The module must search categories by zip code...and distance from that zipcode as selectable parameters.
Mapping via Google API must be included.
...that will check if multiple domains are indexed in Google. I want to be able to submit a list of domains...indexed or not indexed in Google. I think this might be possible using the Google search AJAX. If you have other...program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in
I need some work done using the Google Search API. I need a PHP script that will: 1. Have a web interface...processed simultaneously. So if I want to do 10 search queries, I want to enter a list of 10 items (pasted...done when I press enter. 3. For each search term, query the API and return the first 1K results. 4. De-dupe
...Required: Add a ZipCode Proximity search to existing search options.
Programming: php and mysql
++Primary...the zipcode proximity search option.
SEARCH FUNCTION OPTIONS
1.) I need to add a U.S. zip code...code proximity search (in miles) added to current search options.
- I can supply U.S. zip
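Many of the listings above boil down to the same core computation: the distance between two latitude/longitude points for a zip-code radius search. A minimal sketch of that calculation using the haversine formula (the coordinates below are illustrative; a real application would look them up per zip code from a zip-code table):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/long points."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Roughly New York City to Philadelphia: on the order of 80 miles.
print(haversine_miles(40.7506, -73.9972, 39.9526, -75.1652))
```

In the PHP/MySQL jobs above, the same formula is typically expressed directly in SQL so the "within N miles" filter runs in the WHERE clause rather than in application code.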
New GSI Analysis System GO4
Lean Analysis LEA v1.2
LEA has been developed at GSI for analysis of accelerator beam in the framework of the therapy project. It turned out to be a multi-purpose, versatile and easy-to-use analysis tool which may be useful for small experiments. LEA currently runs under OpenVMS, AIX, and Linux; other Unix platforms could be supported in the future. The visualisation is done by IDL (Interactive Data Language), developed by Research Systems Inc.; a developer license is required. The software is written entirely in C.
1. Current Status
LEA provides the same command line user interface as MBS, the standard data acquisition system at GSI. LEA is currently single threaded, i.e. no parallel I/O, display and command execution is possible. That means that I/O is started by commands and one has to wait until it is finished. Then the display menu can be called to look at the histograms. The display menu must be quit to enter line commands.
The standard part of the analysis program LEA includes the histogram manager from MBS, i.e. the handling of histograms: create, show, clear, delete, etc. Histograms cannot be indexed. Histograms can be dumped to and retrieved from text files in standard GSI format (gef) and processed by GOOSY, SATANGD or PAW. The histograms are organized in a piece of memory called a base. A complete base can be dumped and restored. Currently the base is located in local memory and is therefore lost on program exit (however, an automatic dump on exit could easily be implemented). Additional functionality covers data I/O, graphics (peak find and fit), and the calling of user routines.
Data input/output is provided online from data acquisition system MBS, and from and to files. There are commands provided to inspect, analyse and copy raw data files formatted according the GSI event data format.
Command display starts the display using the IDL graphics. The display is controlled by a GUI. The list of histograms is available, and user-defined pictures containing several plots can be displayed in the GUI.
Once the display has been started, one may enter IDL input mode with the LEA command display -loop. The prompt changes to idl> and IDL commands can be entered. Histograms can be referenced in IDL.
You may execute IDL commands from your routines by calling f_his_idl_exec(string), where string is an IDL command.
Besides the full graphics capabilities, histograms can be displayed in LEA by the command plot histogram.
There is a simple tool for fitting Gauss-shaped peaks in histograms. If there are two or more peaks, a background can optionally be subtracted. The background is fitted as a polynomial (maximum order 2) to the regions between the peaks. The peaks are searched for. The minimum required distance between a peak and a valley can be adjusted. Fitting can be done in the display menu, by the LEA command fit histogram, or by the LEA command plot histogram .. -fit.
A display frame for scatter points can be created in the menu or by command scatter. IDL must have been started before. The command specifies the limits and letterings. Then in the analysis routine one may call routine f_his_scat(x,y,c,p) to plot one point in the specified color.
The type event command reads events from three input sources, MBS transport, MBS stream server, or file, and prints formatted data. This command works the same as the MBS command.
The type buffer command is similar to type event but outputs differently formatted events to terminal.
The Copy event command copies events from the input sources into standard GSI formatted raw data files.
The analyze event command gets events from the input sources and calls the user analysis routine. Optionally this routine may return new events, which are then written as GSI formatted raw data files.
The type file command dumps files similar to the VMS DUMP command.
The user must write his analysis function f_anal and an initialisation function f_anal_init. Typically both are kept in one file to be able to share variables. In f_anal_init the user also may define his own commands. Histograms are defined in a histogram definition file. From this file a procedure to create the histograms and include files for definition and initialisation are created. Accumulation can be done by macros.
2. Necessary Developments
To overcome the current shortcomings some work would be necessary.
The program must be runnable in three threads: one for the display, one for the command interface, and one for the currently executing command. The main thread does all the initialisation as before, i.e. defines the commands, calls f_anal_init, and executes the initialisation command procedure where the user can create data bases and histograms and execute user commands to initialise his analysis. Then the graphics is started in a second thread. It creates graphic output windows, which can be filled in the analysis routine f_anal, and the menu window. The main thread now waits for command input. When a command arrives (from terminal or a GUI), the command will execute in a third thread and the interface is immediately ready for commands. Some commands may execute in the main thread, e.g. to stop a running thread. Only one running command thread should be allowed.
For convenience a graphical user interface is necessary. It can be written in any method, because it will communicate with LEA through TCP sockets. It would start LEA in the background or on a remote machine where the command prompter then waits for commands from a socket. All actions of the GUI will result in command lines sent to the prompter. Some prototypes of GUIs have already been built for LEA and MBS. One version uses the IDL widget library, the other Java. Java seems to be much more powerful and better suited because of the easy IP handling and the multithread support. There remains the problem that the look & feel of the GUI and the graphics would be slightly different. This problem can only be solved if one finds a package which is suited for GUIs and for graphics.
One problem of remote execution frontends is to get status changes back into the GUI. In the MBS system this is achieved simply because most of the status information is located in shared memory and can be sent by server tasks to a GUI thread which can update the widgets. All messages generated by MBS pass one central task which again can send them to a GUI thread.
The whole multitasking system of MBS could be ported to other platforms like Linux. Then LEA could use the same mechanisms because it is written as an MBS task.
As an alternative one could implement feedback mechanisms exactly addressing the requirements of LEA.
To achieve a more uniform outlook it should be investigated whether there are packages on the market providing good GUI building and graphics. It would be extremely nice if that software were PD. Candidates could be tcl/tk or free Java packages providing the graphics. The latter seems to be the most promising way, especially because the GUI for MBS currently under way is written in Java. This would allow for a uniform interface for MBS and LEA.
It can be very useful to run an analysis directly in MBS. This includes the possibility to filter events by software as well as to execute control functions as a result of event analysis. MBS already provides this functionality. Because LEA has been derived from MBS, it is only necessary to port it back. In MBS there is already a histogram server sending histograms on request to clients. The LEA display must get an interface to this server to access remote histograms.
Besides the modifications to the LEA kernel and the GUI, there are some other features currently missing which might be necessary even for small analysis tasks.
LEA could serve as a small, easy to use analysis tool, well integrated into the MBS data acquisition system. It can run inside MBS, or on other Unix machines, either online or offline. There could be a uniform GUI for both MBS and LEA.
GSI Helmholtzzentrum für Schwerionenforschung, GSI
The spatial ion mobility-scheduled exhaustive fragmentation (SIMSEF) module can be used in trapped ion mobility (TIMS) imaging workflows to schedule fragmentation spectra acquisitions for every detected feature across the tissue. This achieves high MS2 coverage and allows more confident compound annotations. To use the SIMSEF tool, the prmMALDI prototype instrument control software is required, which is distributed by Bruker Daltonics to cooperation partners. You can contact your Bruker representative to ask for the prmMALDI prototype and refer to SIMSEF.
- access to timsTOF fleX instrument
- access to the prototypic timsControl 4.1/5.0 with the prm-MALDI option (contact your Bruker representative)
- Download SIMSEF acquisition tool
Note: Acquire MS1 data with a smaller laser spot size than raster size (= stage movement/pixel size); this leaves some tissue for MS2 acquisition.
Note: Due to the size of IMS-MS imaging data, we recommend using 10,000-40,000 pixels in your MS1 run for SIMSEF experiments.
Prepare the sample following your default procedure.
- Setup the imaging run in flexImaging. Set the raster size bigger than the laser spot size in your imaging method (e.g. laser spot size = 20 µm, raster size = 50 µm).
- Save the imaging run via "Save imaging run as...". This will store the geometry files for later use.
- Restart timsControl (prm-MALDI prototype) to make the instrument recognize the geometry files. Recalibrate the instrument (recommended).
- Start the MS1 imaging run.
- Copy the raw data to a powerful data analysis machine (Bruker raw data is supported on Windows and Linux)
- Analyse the MS1 data and filter it to your liking (see imaging workflow). If your IMS separation allows, filter for isotopic peaks.
- Run the SIMSEF scheduler. Using the preview, evaluate the fragmentation schedules and optimise the parameters. Check the task manager for running tasks. The scheduling may need some time.
- Copy the schedule to the instrument computer. (Must be the same path as selected on the analysis computer, not including spaces or special characters)
- Load the instrument method you want to acquire the MS2 spectra with. Make sure the method is appropriate for your expected fragment ion m/z range.
- Disable MALDI mode, enable PASEF mode, ensure that "advanced" is disabled for the collision energy settings.
- Enable MALDI mode, save the method.
- Set MS mode to MS/MS in the MALDI settings.
- Set instrument to operate and recalibrate if needed (recommended).
- Select the correct geometry file on the instrument computer (saved in step 3)
- Run "simsef_pewpew.exe" on the instrument computer, select the acquisition.txt and click "Run acquisition."
- After the acquisition, copy the folder to the analysis computer.
- Import all MS2 files
- Run Mass detection with the centroid mass detector and noise level 50-100 (starting point, for MS2 only!)
- Pair the MS2 spectra to the image features using the Assign MALDI MS2s to features module
- Annotate using any annotation module in MZmine
How to read a film color response chart?
This is partially a follow up to my first question where people provided information about color response charts.
I was wondering how to read them. Take for example this one from Kodak:
This image is taken from Kodak's material and is owned by them.
How will looking at these charts help me understand any sort of bias that their film has (for example, how will I know that their reds will be stronger)?
Hopefully someone more knowledgeable than me will complete my response. The little I know about these curves follows:
1) Characteristic Curves: The more a negative is exposed, the darker it gets. The log-log density/exposure curves simply show how dark the film gets in response to more exposure. These lines do not coincide on color negatives, for several reasons: the dyes may not have the same response rate, the dyes are superposed and thus there is light falloff from one to the other, there may be color filters between layers that absorb some of the light, and finally the film base may have some color as well. On slide films with a perfectly transparent film base, these do reasonably coincide, though... Zero density means the film is completely transparent. You can see here that the film base of Portra 160 is not transparent, even when unexposed. The film base should actually look very red (it does), because when unexposed it transmits much more red light than blue and green, and absorbs much more blue and green light than red. You can read about light absorbance here. Another point you see clearly in this chart is that, even when strongly overexposed, the maximum density of this film (DMAX) won't exceed 3. This means almost all consumer-level negative scanners can scan this film with no problem. Higher-DMAX scanners are only useful with very dense slides or B&W negatives. In my experience an object with a density of 4 looks almost opaque unless under strong light.
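To put numbers on those density figures: density D and transmittance T are related by the standard formula T = 10^(-D). A quick sketch in plain Python, with densities chosen to match the values discussed above:

```python
# Optical density D and transmittance T are related by T = 10**(-D),
# so each added unit of density cuts the transmitted light tenfold.
def transmittance(density):
    return 10 ** (-density)

# A DMAX of 3 still passes 0.1% of the light, which consumer scanners
# can handle; a density of 4 passes only 0.01% and looks near-opaque.
print(transmittance(3))  # 0.001
print(transmittance(4))  # 0.0001
```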
2) Spectral Sensitivity Curves: The film's photo-sensitive layer is composed of three dyes that respond to three different parts of the light spectrum. These curves show the spectral sensitivity of each of these dyes across the visible light spectrum (390-700nm). Ideally, this chart should have an overall sensitivity curve as well, but this curve is not provided here, so we can't clearly deduce how responsive the entire color emulsion is at different frequencies. Don't forget that you should look at this chart with negative colors in mind, e.g. the cyan dye actually responds to red (cyan-forming = red-sensitive). You can see that the film may have a potential weakness in the frequencies falling in-between the sensitivity ranges of the three dyes.
3) Spectral Dye-Density Curves: This chart is in a sense a summary of all the curves on the two charts above, and it shows how opaque the film becomes throughout the visible spectrum in response to exposure to a neutral gray subject. The key here, IMHO, is the variation from baseline (minimum density). Higher variation means higher saturation and lower variation means subdued colors and muteness. What I understand from the chart above is that Portra 160 renders limited "violets and high blues", rich "blues, blue-greens, and high greens", very muted yellowish greens and subdued pure yellows, pretty normal oranges and shiny reds, although a bit muted in lower reds (700nm is low and 400nm is high). In comparison, a cheap Fuji renders very saturated primary colors and a cheap Kodak renders subdued colors across the range.
4) MTF: This is a measure of optical sharpness (acuity), and can be measured for different parts of an optical system, including the film and the lens. What you see on the MTF chart above is that the film has near-perfect rendering up to 20 lines per millimetre, where the MTF curves start dropping. This means that if we shoot a subject with adjacent horizontal dark and light lines at a distance where 20 of these lines get registered in only a millimetre of film (the spatial frequency), the photosensitive emulsion is able to render these lines as perfectly separate, without any cross-bleeding on the edges between dark and light lines. At any spatial frequency above this, light will bleed from the lighter lines into the darker lines across the edges, and we won't have complete separation. As you can see, the least accurate channel on this specific film is red, and it reaches only about 15% separation at 80 lines per millimetre: when you shoot with this film, you will only see the finest details if they are formed by blue or green light.
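To make the "15% separation" figure concrete: MTF at a given spatial frequency is the modulation of the recorded image divided by the modulation of the test target, where modulation is the Michelson contrast. A small illustrative sketch (the intensity values are invented, not from the datasheet):

```python
# Modulation (Michelson contrast) of a line pattern: (Imax - Imin) / (Imax + Imin).
# MTF at a spatial frequency = modulation of the recorded image
# divided by modulation of the test target.
def modulation(i_max, i_min):
    return (i_max - i_min) / (i_max + i_min)

target = modulation(1.0, 0.0)        # fully separated dark/light lines -> 1.0
recorded = modulation(0.575, 0.425)  # hypothetical heavily blurred rendering
mtf = recorded / target
print(round(mtf, 3))  # 0.15, i.e. the ~15% separation described for red at 80 lp/mm
```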
Re 1) - I'd just add that the shape of those curves is important. a) It tells you how the film handles shadows and highlights.
With desktop GUI apps generally becoming increasingly bloated and sluggish over time (largely thanks to the growing popularity of Electron.js), some technical users are turning back to software with simpler and leaner user interfaces - command lines and pseudo-GUIs rendered in ASCII/Unicode (text user interfaces). CLI tools have the additional benefit of being relatively easy to use via subprocesses from automation scripts. We will take a look at some modern software that is meant to be used via terminal.
Carbonyl: 21st Century Lynx
Carbonyl is a Chromium fork that is a modern take on TUI-based web browser idea. If you have Docker installed you can easily try it out:
$ docker run --rm -ti fathyb/carbonyl https://trickster.dev
It works quite well with simpler sites, but it can run into issues when you trip anti-automation solutions. Another limitation is that you cannot watch videos with sound.
ag, ripgrep - the new grep(1)
You use grep(1) when you want to search something in source code or text file(s).
The Silver Searcher (ag) is a tool similar to grep, but designed to search for substrings in source code and optimised for that end.
Ripgrep is another grep-like tool to run fast searches for text in a directory tree (with regex support), while also respecting your .gitignore file.
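As mentioned in the intro, a key benefit of such CLI tools is easy scripting via subprocesses. A minimal Python sketch, shelling out to plain grep(1) (ripgrep's rg accepts the same basic pattern/path arguments):

```python
import os
import subprocess
import tempfile

def search_lines(pattern, path):
    # -n prefixes matches with line numbers; grep exits 1 when nothing
    # matches, so we inspect stdout instead of using check=True
    result = subprocess.run(
        ["grep", "-n", pattern, path],
        capture_output=True, text=True
    )
    return result.stdout.splitlines()

# demo on a throwaway file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("alpha\nbeta\nalphabet\n")
    path = f.name
matches = search_lines("alpha", path)
os.unlink(path)
print(matches)  # ['1:alpha', '3:alphabet']
```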
bottom, btop, glances - better kind of top(1)
In Linux/Unix systems top(1) is a TUI tool that renders a list of running processes with their resource consumption numbers - similar to Activity Monitor in macOS or Task Manager in Windows. Many tech people are aware of htop - an alternative tool with enhanced interactivity.
But we can go even further. Bottom (btm) is a Rust program that renders resource consumption graphs in Unicode pseudo-graphics alongside a process list.
Terminal Image Viewer (tiv)
But what if you want to take a quick glance at pictures without having a GUI environment? Perhaps you have some web scraping project that involves downloading images, and it's a bit of a pain to use scp(1) or sftp(1) to download them from the server in order to preview them. Terminal Image Viewer is a C++ program that relies on features of modern terminals for actually quite good image rendering.
Note however that this does not work on default Terminal app on macOS - you will need iTerm for this.
HTTPie - command line API client
HTTPie is a CLI tool designed specifically for launching requests against RESTful APIs. Unlike curl, which is a more general-purpose tool, HTTPie provides a simple CLI that maps directly onto the aspects of the HTTP request you want to generate:
$ http GET https://ifconfig.me/all.json
HTTP/1.1 200 OK
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Via: 1.1 google
content-type: application/json; charset=utf-8
date: Tue, 08 Aug 2023 12:17:07 GMT
strict-transport-security: max-age=2592000; includeSubDomains
"encoding": "gzip, deflate, br",
"forwarded": "[REDACTED], 184.108.40.206,220.127.116.11",
"via": "1.1 google"
We can see it also pretty-prints the JSON payload in the response.
jq - CLI tool and DSL for JSON processing
If pretty-printing the JSON is not enough, you can use jq - a CLI tool that provides an entire Domain Specific Language for JSON wrangling.
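For a taste of what the DSL does: a jq filter like `.user.roles[0]` walks into nested JSON and picks out a value. A rough stdlib-Python equivalent of that one filter (illustrative only; the document and field names are made up):

```python
import json

# The jq one-liner:   echo "$DOC" | jq -r '.user.roles[0]'
# does roughly what the stdlib does here: parse, then index in.
doc = json.loads('{"user": {"name": "ada", "roles": ["admin", "dev"]}}')
first_role = doc["user"]["roles"][0]
print(first_role)  # admin
```

jq's real value shows up in longer pipelines (mapping, filtering, restructuring), where the filter syntax stays far terser than the equivalent Python.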
tshark - CLI equivalent of Wireshark
Wireshark is a prominent packet sniffer used by network engineers, security people and software developers. TShark is a relatively obscure sibling of Wireshark that is CLI-based. Think of it as a tcpdump(1), but better. It can be used to not only capture network traffic, but to run various filters and statistical techniques for traffic analysis purposes.
But what if you want something like Wireshark GUI in your terminal environment? TermShark is a TUI wrapper for TShark that renders a user interface similar to that of Wireshark.
When exploring web and mobile apps for security research or automation, one may want to intercept API traffic to see the conversations between the app and remote servers. mitmproxy is an interactive TUI program that works as an HTTP(S) proxy server meant to capture and inspect HTTP(S) requests. It's a simpler alternative to the Burp Suite GUI app that you can also customise with your own plugin code.
Neovim is a modernised and more extensible addition to the vi/VIM text editor lineage with features like Language Server Protocol support, Lua scripting interface, improved usability, RPC API, system clipboard integration, built-in terminal emulator and so on.
GitHub CLI tool
There’s git(1), and there’s GitHub. The former is CLI tool and the latter is web-based platform. To bridge the gap between the two there’s a Github CLI tool for doing tasks like working with tickets, managing pull requests and configuring the CI system. This can be helpful when doing devops work to automate various aspects of software development lifecycle.
But what if you want to see some maps in your terminal? You can do that with mapscii - a Node.JS program that renders OpenStreetMap data in terminal environment with mouse support. If you don’t want to install it via NPM you can access a test instance via telnet protocol:
$ telnet mapscii.me
In scraping and automation it is very common to deal with tabular data in formats such as CSV files, Excel spreadsheets, SQLite databases and so on. Visidata is a TUI application for doing exactly that. It covers a good subset of what Excel can do - data filtering, rendering some charts (for example, Shift-F quickly renders a histogram of values in the current column), letting you edit cells and so on.
Since Visidata is developed in Python you can install it via pip. It is also available in the package managers of some Linux distributions.
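The Shift-F frequency-table idea can be approximated in a few lines of stdlib Python (the CSV data here is invented for illustration):

```python
import csv
import io
from collections import Counter

# Count how often each value occurs in one column of a CSV -
# essentially what Visidata's Shift-F frequency table shows.
data = "city,amount\nOslo,10\nOslo,20\nRiga,5\n"
rows = csv.DictReader(io.StringIO(data))
hist = Counter(row["city"] for row in rows)
print(hist.most_common())  # [('Oslo', 2), ('Riga', 1)]
```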
Radare2 is a programmable binary reverse engineering
toolkit for low-level hackers. Besides the main TUI program it also ships
r2pipe - a Python module for scripting binary analysis workflows (disassembly,
binary code analysis, runtime debugging).
Rich and Textual
These are all nice examples, but what if you want to develop something like this yourself? Writing ncurses-based C programs is not the most productive way to spend time in 2023. When it comes to Python CLI/TUI programming, there are two inter-related projects of interest:
- Rich - a library for enhanced text formatting.
- Textual - a TUI programming framework, based on Rich (with CSS support).
To get a glimpse of what it can do, you can run:
$ python3 -m textual
package org.datagr4m.maths.geometry;
import java.awt.geom.Point2D;
import java.awt.geom.Rectangle2D;
import org.jzy3d.maths.Coord2d;
public class RectangleUtils {
public static boolean DEFAULT_BORDER_CONSIDERATION = false;
public static Rectangle2D scale(Rectangle2D r, double widthChange, double heightChange){
return build(r.getCenterX(), r.getCenterY(), r.getWidth()+widthChange, r.getHeight()+heightChange);
}
public static Rectangle2D build(double centerx, double centery, double width, double height){
return new Rectangle2D.Double(centerx-width/2, centery-height/2, width, height);
}
public static Rectangle2D build(Coord2d center, double width, double height){
return new Rectangle2D.Double(center.x-width/2, center.y-height/2, width, height);
}
public static Rectangle2D build(Point2D topLeft, Point2D bottomRight){
return new Rectangle2D.Double(topLeft.getX(), topLeft.getY(), bottomRight.getX()-topLeft.getX(), bottomRight.getY()-topLeft.getY());
}
public static Point2D topLeft(Rectangle2D rectangle){
return new Point2D.Double(rectangle.getMinX(), rectangle.getMaxY());
}
public static Point2D bottomRight(Rectangle2D rectangle){
return new Point2D.Double(rectangle.getMaxX(), rectangle.getMinY());
}
public static Point2D center(Rectangle2D rectangle){
return new Point2D.Double(rectangle.getCenterX(), rectangle.getCenterY());
}
/** Returns true if the interior of r1 intersects the interior of r2. */
public static boolean intersects(Rectangle2D r1, Rectangle2D r2){
return r1.intersects(r2.getMinX(), r2.getMinY(), r2.getWidth(), r2.getHeight());
}
public static boolean contains(Rectangle2D r, Point2D point){
return contains(r, point, !DEFAULT_BORDER_CONSIDERATION);
}
public static boolean contains(Rectangle2D r, Point2D point, boolean ignoreBorder){
if(ignoreBorder){
if(point.getX()<=r.getMinX())
return false;
else if(point.getX()>=r.getMaxX())
return false;
else if(point.getY()<=r.getMinY())
return false;
else if(point.getY()>=r.getMaxY())
return false;
return true;
}
else{
if(point.getX()<r.getMinX())
return false;
else if(point.getX()>r.getMaxX())
return false;
else if(point.getY()<r.getMinY())
return false;
else if(point.getY()>r.getMaxY())
return false;
return true;
}
}
public static boolean contains(Rectangle2D r, Coord2d point){
return contains(r, point, !DEFAULT_BORDER_CONSIDERATION);
}
public static boolean contains(Rectangle2D r, Coord2d point, boolean ignoreBorder){
if(point==null || r==null)
return false;
if(ignoreBorder){
if(point.x<=r.getMinX())
return false;
else if(point.x>=r.getMaxX())
return false;
else if(point.y<=r.getMinY())
return false;
else if(point.y>=r.getMaxY())
return false;
return true;
}
else{
if(point.x<r.getMinX())
return false;
else if(point.x>r.getMaxX())
return false;
else if(point.y<r.getMinY())
return false;
else if(point.y>r.getMaxY())
return false;
return true;
}
}
public static boolean contains(Point2D rectCenter, float rectWidth, float rectHeight, Point2D point){
return contains(rectCenter, rectWidth, rectHeight, point.getX(), point.getY(), !DEFAULT_BORDER_CONSIDERATION);
}
public static boolean contains(Point2D rectCenter, float rectWidth, float rectHeight, Point2D point, boolean ignoreBorder){
return contains(rectCenter, rectWidth, rectHeight, point.getX(), point.getY(), ignoreBorder);
}
public static boolean contains(Point2D rectCenter, float rectWidth, float rectHeight, double x, double y, boolean ignoreBorder){
double rminx = rectCenter.getX() - rectWidth/2;
double rmaxx = rectCenter.getX() + rectWidth/2;
double rminy = rectCenter.getY() - rectHeight/2;
double rmaxy = rectCenter.getY() + rectHeight/2;
if(ignoreBorder){
if(x<=rminx)
return false;
else if(x>=rmaxx)
return false;
else if(y<=rminy)
return false;
else if(y>=rmaxy)
return false;
return true;
}
else{
if(x<rminx)
return false;
else if(x>rmaxx)
return false;
else if(y<rminy)
return false;
else if(y>rmaxy)
return false;
return true;
}
}
public static boolean contains(Point2D p1, Point2D p2, float width, double x, double y){
return contains(p1, p2, width, x, y, !DEFAULT_BORDER_CONSIDERATION);
}
/** Check intersection for a rectangle described as a tube made of two extremities and a width.*/
public static boolean contains(Point2D p1, Point2D p2, float width, double x, double y, boolean ignoreBorder){
double rminx = 0;
double rmaxx = 0;
double rminy = 0;
double rmaxy = 0;
if(PointUtils.areHorizontal(p1, p2)){
rminx = Math.min(p1.getX(), p2.getX());
rmaxx = Math.max(p1.getX(), p2.getX());
rminy = p1.getY() - width/2;
rmaxy = p1.getY() + width/2;
}
else if(PointUtils.areVertical(p1, p2)){
rminx = p1.getX() - width/2;
rmaxx = p1.getX() + width/2;
rminy = Math.min(p1.getY(), p2.getY());
rmaxy = Math.max(p1.getY(), p2.getY());
}
else{
return LineUtils.inTube(p1, p2, width, x, y);
}
if(ignoreBorder){
if(x<=rminx)
return false;
else if(x>=rmaxx)
return false;
else if(y<=rminy)
return false;
else if(y>=rmaxy)
return false;
return true;
}
else{
if(x<rminx)
return false;
else if(x>rmaxx)
return false;
else if(y<rminy)
return false;
else if(y>rmaxy)
return false;
return true;
}
}
}
package hageldave.jplotter.util;
import java.util.HashMap;
import java.util.function.Supplier;
import hageldave.jplotter.canvas.FBOCanvas;
import hageldave.jplotter.gl.Shader;
import hageldave.jplotter.util.Annotations.GLContextRequired;
/**
* The ShaderRegistry class is a statically accessed class for keeping track of {@link Shader}s.
* To avoid the creation of duplicate shaders in the same GL context, this class can be used to
* easily allocate or get shaders shared by different objects.
* <p>
* Shaders are identified by context (canvasID) and a label, and they are obtained through the
* {@link #getOrCreateShader(String, Supplier)} method.
* When the shader is no longer in use by the object it has to be handed back to this class
* through {@link #handbackShader(Shader)} which will close it if no longer in use by any other object.
* <p>
* Each shader in the registry is reference counted to determine if a registered shader is in use or not.
* {@link #getOrCreateShader(String, Supplier)} increments the reference count, {@link #handbackShader(Shader)}
* decrements the reference count.
*
* @author hageldave
*/
public final class ShaderRegistry {
private static final HashMap<Integer,HashMap<String, Pair<Shader,int[]>>> context2label2shader = new HashMap<>();
private static final HashMap<Shader, Pair<Integer, String>> shader2contextAndLabel = new HashMap<>();
private ShaderRegistry(){/* statically accessed singleton */}
/**
* Returns the desired shader.
* If a Shader with the provided label in the current GL context is already registered,
* it will be returned and its reference count incremented.
* Otherwise a new shader will be allocated using the specified supplier and registered.
*
* @param label of the shader
* @param shadermaker supplier (constructor/factory) of the shader in case it was not yet registered.
* @return shader corresponding to specified label and current context.
*
* @throws IllegalStateException when no context is active (FBOCanvas.CURRENTLY_ACTIVE_CANVAS == 0)
*/
@GLContextRequired
public static Shader getOrCreateShader(String label, Supplier<Shader> shadermaker){
int canvasid = FBOCanvas.CURRENTLY_ACTIVE_CANVAS;
if(canvasid == 0){
throw new IllegalStateException(
"No active FBOCanvas, the FBOCanvas.CURRENTLY_ACTIVE_CANVAS field was 0. " +
"This indicates that there is likely no active GL context to execute GL methods in."
);
}
HashMap<String, Pair<Shader,int[]>> label2shader = context2label2shader.get(canvasid);
if(label2shader == null){
label2shader = new HashMap<>();
context2label2shader.put(canvasid, label2shader);
}
Pair<Shader, int[]> shaderref = label2shader.get(label);
if(shaderref == null){
shaderref = Pair.of(shadermaker.get(), new int[1]);
label2shader.put(label, shaderref);
shader2contextAndLabel.put(shaderref.first, Pair.of(canvasid, label));
}
// increment ref count
shaderref.second[0]++;
return shaderref.first;
}
/**
* Hands back the specified shader, signaling it is no longer in use by the caller.
* This decrements the reference count of the specified shader in the registry.
* When the reference count drops to 0, the shader is closed (destroyed).
* @param shader to be handed back.
*/
@GLContextRequired
public static void handbackShader(Shader shader){
int canvasid = FBOCanvas.CURRENTLY_ACTIVE_CANVAS;
if(canvasid == 0){
throw new IllegalStateException(
"No active FBOCanvas, the FBOCanvas.CURRENTLY_ACTIVE_CANVAS field was 0. " +
"This indicates that there is likely no active GL context to execute GL methods in."
);
}
Pair<Integer, String> pair = shader2contextAndLabel.get(shader);
if(pair == null)
return;
if(pair.first != canvasid){
throw new IllegalStateException(
"Canvas ID of shader and currently active canvas dont match." +
"This means that the wrong context is active to delete the shader."
);
}
HashMap<String, Pair<Shader, int[]>> label2shader = context2label2shader.get(pair.first);
Pair<Shader, int[]> shaderref = label2shader.get(pair.second);
if((--shaderref.second[0]) == 0){
// destroy the shader
shaderref.first.close();
label2shader.remove(pair.second);
shader2contextAndLabel.remove(shaderref.first);
}
}
}
What might be an appropriate term for a long-term, very serious, girlfriend?
In the USA, where I live, it is becoming increasingly common that men and women are making committed relationship decisions, but choosing to remain unmarried. However, they live together, raise children together, and otherwise appear married. They are simply not legally married and they are okay with it and so are most other Americans. It is obvious that they are highly involved with each other and the depth of their relationship is akin to a long standing marriage.
I am having trouble determining what to call men and women in this kind of relationship, relative to the other. I might say "my friend's girlfriend," however, I would also use this same term for whatever the relationship of two 14-year-olds is. It seems to me that my friend and his relationship with his girlfriend deserves higher recognition.
Lately, I have resorted to calling these women ladies. I might ask my friend "How is your lady doing," or say about a party "Bring your ladies." I like this because the definition of lady implies a high 'social' status and removes any negative connotation that the women are morally devoid because they are not married, yet live with a man. Oddly, someone told me recently that it sounded sexist when I referred to a few friends and "their ladies." I don't even really know how to approach that.
It wasn't until just now that I realized I have no ideas at all for men. Just "lady" for women.
possible duplicate of Word for partner you are living with but not married to. Also see: http://english.stackexchange.com/questions/76006/is-there-a-more-concise-term-for-a-long-term-girlfriend-boyfriend-than-signific
@coleopterist But I want a word that is more than 'concise' and 'inclusive'. I want to convey that because these persons are special to my friends they are also special to me because I liken their relationship to my marriage. All inclusive is just too annoying and takes passion out of words.
You are welcome to edit your question and detail why none of the answers in the other questions are suitable in your case. As it stands, your question is currently a duplicate of at least one of those two. It's also possible that a suitable word or term that satisfies all your criteria is simply not available.
'Ladies' is considered a bit uncool nowadays, a little cringeworthy.
In the UK at least, calling someone a lady is fine. It's your description of them as 'your ladies' that may be a poor choice, whereas "are the ladies coming?" would be perfectly acceptable.
Specific to the urban African-American subculture, a long-term girlfriend is called your "wifey" (as opposed to "wife").
Too long for a comment, but not necessarily meant as an answer, since it's very geographical, so be easy on the downvotes:
Just a few days ago I discussed this concept with a US native speaker. I live in New Zealand, where we call the "significant other" just a "partner". It may well be your married spouse, your long or short term boy-/girl-friend, or your same-sex relationship other half. Male, female, doesn't matter. It's the person you share your life with. Married or not, legally binding or not. Who cares. Invitations state to "bring your partner", which extends to whoever is your significant other.
To the US native I was discussing this with, though, the word "partner" seemed to be heavily leaning towards a homosexual relationship. (But then, they felt that "pot plant" was growing marijuana :-))) In NZ it definitely is NOT (either marijuana or gay) . It's just a "politically correct" way to name your, ehrm, partner. And you can present the host of the function with a pot plant without any legal consequences.
Ah, to live on an island in the South Pacific.
Long story short: don't know about the US convention, but in New Zealand English, "partner" covers all aspects of relationships and can safely be used in any context to refer to the person you (currently) share your life with.
Thank you for the response. The US native's reaction to "partner" and "pot plant" are extremely common. Google pot plant and try not to see the marijuana pictures. I personally do not like partner because that is already taken in terms of business, dancing, music accompaniment, and many others involving action only. Relationships are not an action; they are something you live. It is too widely used already and its definition is too restrictive, imo.
Fair enough. I wasn't expecting to deliver the perfect answer, but wanted to share how this topic is handled in New Zealand. I'm aware that other countries and cultures will have differing connotations for "partner" (and "pot plant").
In the US the plant would be a 'potted plant'
Partner: it works for male or female members of a relationship and has been the correct term in the western US for a decade or so and is generally accepted throughout the rest of the US.
Lady: sounds rather old-fashioned and rather sexist.
Can you expand on why Lady sounds sexist to you? That is actually why I asked in the first place because someone else said the same thing.
In the UK, we usually just say something like "how's the missus?" or "how's your missus", regardless of marital status. We also often call partners the other half, regardless of marital status or gender, as in "are you bringing the other half?" The term your better half is often used jokingly.
There is a word that sums it all up: concubinage.
Wikipedia:
Concubinage is an interpersonal relationship in which a person engages in an ongoing relationship (usually matrimonially oriented) with another person to whom they are not or cannot be married. The inability to marry may be due to differences in social rank (including slave status), or because the man is already married.
Yet Oxford Dictionaries say that a concubine is “a woman who lives with a man but has lower status than his wife or wives”.
Where I live (Romania) this term has bad connotations appended to it. It's used (in the news mostly) only when bad stuff happens. Like:
A man strangled his concubine after disagreement on who deserves to win on Dancing with the Stars.
This isn't too much help but it would be a good sociological experiment to see how people react if you refer to them as concubines.
Yeah. I think I would have just about the same reaction as Bi*** or Ho. That is far too dangerous an experiment for me.
Haha. You're probably right. But this is the word that captures that type of relationship. It's people's fault that it's not accepted. Offense is always taken, not given. :)
|
STACK_EXCHANGE
|
I have a laptop dedicated to my x-carve, and I use the touch pad on the laptop as the mouse, which often registers a single mouse click as multiple clicks.
Since the button for confirming the current work position is in the same location as the previous material is secure button, this means that fairly frequently I end up confirming the position when I wanted to use the last home position.
Then I have to re-zero the Z, which is not just extra work, but also less accurate.
All that would have to happen is for those buttons to be moved to a different location than the previous buttons; that way, an inadvertent mouse press wouldn’t do anything.
I sort of hate trackpads in general, but I don’t really have a place to put a mouse.
I’ve already changed the settings so that only buttons will send a click, but it still double taps. I think I’ve adjusted the delay, but I’ll look again.
Regardless, occasionally I’ll click through too fast. Since you lose the previous work position if you click the wrong button, you really want the UI design to encourage the user to think (at least a little) in this particular case.
I’m sorry, but I don’t agree that your particular hardware issues are a global UX/UI problem.
I also like to believe that when I’m dealing with an automated power tool that could seriously injure me without skipping a beat, that I take the time and go slowly through the setup process. Best practices and whatnot.
It seems like you’re having issues with your computer and asking Inventables to rewrite their interface to accommodate you, then coming up with a safety-based use case to justify it.
It’s hardly a hardware “problem” that is unique to me or my setup; many of the track pads used in laptops exhibit this behavior.
Look, this is part of what I’ve been doing for a living for over 30 years, so it’s something that I know about.
I suspect (based on the wording of your response) that you do too.
There is a general school of thought about UI design that says don’t make the user think, and I generally agree with that.
However, in my opinion there is a valid exception to that rule which says that if the thing they are about to do has an irreversible effect, you should try to make them think, at least a bit.
I don’t really want to argue about it, and perhaps I’m overreacting to your response, but I made a request, Inventables is free to act on it, or ignore it, as they see fit.
OK, no problem, like I said, maybe I read too much into it.
Anyway, there are a couple of different use cases about why I think it would be a generally good thing for the series of carve steps to not have all the “OK” buttons in the same places, only one of which has to do with trackpads.
When I can get to the machine again (possibly next week), I’ll try again to make the trackpad/button settings useful, but I’ll probably also try tampermonkey to move the buttons.
Sometimes my fingers hit the OK button before my brain has a chance to scream NO!
|
OPCFW_CODE
|
Venting Dryer to Outside Through Foundation Wall OR Vent?
I'm relocating and replacing the ducting to vent my electric dryer that is located in the middle of my home. I'm using 4" galvanized duct, I've installed a Dryer Box on the back (2x6) wall of my laundry closet in a downward venting position. The vent pipe goes down through the subfloor from the "Dryer Box" and immediately makes a 90-degree turn (elbow) back towards the dryer running in between and parallel with my floor joists. I'm hanging the pipe using galvanized strapping between the joists close to the subfloor (between the subfloor and rolled fiberglass insulation) with a very slight downward angle (1/4" per 5'). As a straight shot, it is 20' to the exterior wall from the elbow.
I've brought the pipe halfway to the exterior wall in a straight run from the elbow. I'm considering two options from here. I was hoping to get opinions on which one is better.
Things to note: it's a single-level home with 8" poured concrete foundation walls. It uses 9.5" i-joists, which are hung off the foundation wall (no rim joist).
First option: Continue the straight run and vent directly through the exterior foundation wall using a 4" core drill to make an exit hole. This will place the hole about 2-3" from the top of the concrete. I can't imagine it will affect the structural integrity of the foundation wall but I am concerned about causing future cracking and the potential for blowing out the top (the foundation is 20 years old).
Second option: There is a foundation vent about 40" to the left, two joist bays over from where my pipe is now (which is where the builders had terminated the dryer duct originally using semi-rigid). From the end of my current straight run, I would have to do a 90-degree elbow to the left, drill 4" holes through 2 I-joists and then another 90-degree elbow followed by a straight run, terminating through the foundation vent.
I've checked product literature for the i-joists and it says making 4" holes through the middle of them is acceptable. I'm concerned about making a hole in the concrete, but having fewer elbows seems better (not to mention that going through a foundation vent looks kind of hokey). I've considered using a product called the "dryer-ell" (made by the same company that makes the dryer box), which claims not to restrict airflow like a normal elbow; however, that route still has 2 extra turns, and keeping it as a straight run seems like it would make for easier cleaning later on.
Any advice would be appreciated.
Just because that's where it was doesn't mean that's where it should go. I'll make a hole in a rim joist every time before I have to core a 4" hole through concrete. "two joist bays over from where my pipe is now" - the bay it's in now, cut the ceiling open and hole saw the rim joist; two elbows and you're done (if it's actually inside the bay, you don't need any elbows)
For option 1, if the hole is above the concrete, what are you drilling through? Is it the band joist/rim joist which is wood?
@achoa - the hole would not be above the concrete. The pipe is running in between the joists, which are hung (using joist hangers) off the sill plate going down the foundation wall. The i-joist is 9.5" tall; the sill plate (flat 2x8) is 1.5", leaving 8" of the joists butting up against my foundation wall. When I started this project I assumed my floor joists sat on top of the foundation wall... unfortunately not the case; otherwise, you'd be correct and I would just drill right through the rim board.
@mazura - sorry this is a little hard to explain, wish I'd taken a picture when I was down there. No ceiling to cut open, it's single level, I'm working in the crawlspace. Also no rim joist. please see my above comment to achoa. Basically, it's a core drill through a foundation wall or hole saw through two joists and duct it out a foundation vent. either way, it comes out at the same height outside the house (about 10 inches off the ground)
I fully agree with @mazura going through the rim joist or whatever is sitting on top of the foundation is 100% better than trying to core drill the foundation or going another 40’ and more elbows to a foundation vent.
I see. While I generally don’t like fighting gravity, there are certainly dryer duct installations that go through attics and rooftops, so would it be so bad to go slightly elevated and drill through the rim joist from the straight-shot approach?
|
STACK_EXCHANGE
|
Remove duplicate links from my navigation Breadcrumb inside my Enterprise wiki site collection
I am working on an Enterprise wiki site collection inside my SharePoint 2013 on-premises farm.
Now I am following this approach to show a breadcrumb navigation link for my site collection. I mainly added this code inside my master page under the <div id="suiteBarRight">, as follows:-
<div id="suiteBarRight">
<asp:ContentPlaceHolder id="customBreadcrumb" runat="server">
<asp:SiteMapPath runat="server" SiteMapProvider="SPContentMapProvider" RenderCurrentNodeAsLink="false" NodeStyle-
CssClass="breadcrumbNode" CurrentNodeStyle-CssClass="breadcrumbCurrentNode" RootNodeStyle-CssClass="breadcrumbRootNode"
HideInteriorRootNodes="true" SkipLinkText=""/>
</asp:ContentPlaceHolder>
Now this worked for almost all the libraries and pages inside my enterprise wiki, which sounds promising, as it worked across all pages and subsites. But I have noted this issue: if I access a wiki page using its friendly URL, I get these breadcrumb links:-
Business continuity Plan > BuisnesscontinuityPlan > Home
here is a screenshot of the breadcrumb links:-
Now the duplicate entry is the second link, labeled BuisnesscontinuityPlan, where my site collection URL is /sites/BuisnesscontinuityPlan. If I click on the first link, labeled Business continuity Plan, I am redirected to the site home page, which is fine. But if I click on the second link, BuisnesscontinuityPlan, I am redirected to the Pages library. So I want to remove the duplicate second link labeled BuisnesscontinuityPlan. I am thinking of writing custom CSS to do so, mainly to remove the <a> link inside the breadcrumb if it has the label BuisnesscontinuityPlan, and I need to remove the preceding ">" character as well. Can anyone advise on this, please?
If you are trying to hide the element by just checking its text, then this jQuery code should work. I haven't tested it yet.
var $duplicateItem = $('#suiteBarRight > span > span:nth-child(3)');
var duplicateText = $duplicateItem.text();
if (duplicateText === "BuisnesscontinuityPlan") {
    $duplicateItem.hide(); // this will hide the element with text BuisnesscontinuityPlan
    $duplicateItem.next().hide(); // this will hide the next ">" element
}
This code will just hide the element if its text matches BuisnesscontinuityPlan exactly, and it must be the third element in the DOM, as shown in your screenshot.
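As an alternative that does not depend on the link being exactly the third child, the matching logic can be expressed as a small pure function over the breadcrumb labels. This is only an illustrative sketch; `removeDuplicateCrumb` is a hypothetical helper, not a SharePoint or jQuery API, and the DOM wiring is omitted.

```javascript
// Illustrative sketch: filter breadcrumb labels, dropping any label that
// duplicates the site-collection URL segment. removeDuplicateCrumb is a
// made-up helper name; hook its result back into the DOM as needed.
function removeDuplicateCrumb(labels, siteSegment) {
  return labels.filter(function (label) {
    return label !== siteSegment;
  });
}

var crumbs = ["Business continuity Plan", "BuisnesscontinuityPlan", "Home"];
console.log(removeDuplicateCrumb(crumbs, "BuisnesscontinuityPlan"));
// → [ 'Business continuity Plan', 'Home' ]
```

Matching on the label rather than on position means the snippet keeps working if SharePoint renders extra nodes before the breadcrumb.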
|
STACK_EXCHANGE
|
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis;
namespace MixinRefactoring
{
/// <summary>
    /// the mixer includes a mixin into a target (the mixin child)
/// by adding an implementation of every mixin member
/// to the mixin child
/// </summary>
public class Mixer
{
private readonly MemberComparer _memberCompare = new MemberComparer();
private readonly List<Member> _membersToImplement = new List<Member>();
public void IncludeMixinInChild(MixinReference mixin, ClassWithSourceCode child)
{
var childMembers = child.MembersFromThisAndBase;
var mixinMembers = mixin.Class.MembersFromThisAndBase;
foreach(var mixinMember in mixinMembers)
{
var membersWithSameSignatureInChild = childMembers
.Where(x => _memberCompare.IsSameAs(x, mixinMember))
.ToList();
// 1. case: method does not exist in child => implement it
if (!membersWithSameSignatureInChild.Any())
_membersToImplement.Add(mixinMember.Clone());
else // 2. case: method does exist in child, but is abstract and not overridden => override it
{
// member is declared as abstract in a base class of child
// but not in child itself
var abstractMembers = membersWithSameSignatureInChild
.Where(x =>
x.IsAbstract &&
x.Class != child &&
!child.HasOverride(x));
_membersToImplement.AddRange(abstractMembers.Select(x => x.Clone(true)));
}
}
}
public IEnumerable<Member> MembersToImplement => _membersToImplement;
public IEnumerable<Property> PropertiesToImplement => _membersToImplement.OfType<Property>();
public IEnumerable<Method> MethodsToImplement => _membersToImplement.OfType<Method>();
public IEnumerable<Event> EventsToImplement => _membersToImplement.OfType<Event>();
}
}
|
STACK_EDU
|
Instead of organizing people into "functions" we may take a cue from some of the greatest games, and organize around "quests". That may sound very designed, but what it means is ad hoc, problem-centric organization in which people are allowed to organize themselves around what they perceive to be meaningful, important problems in need of solving.
- "Lock in" of talent may squelch passion, demoralize employees and of course lead to worse fit of talent to problems.
- Top down delegation of tasks utilizes fewer minds thinking about what problems are actually important, making the organization less adaptable, less able to correctly identify dangers or opportunities.
- Utilize the power of the many by, to some degree at least, letting everyone create "quests" for themselves and others. A quest is a description of a problem that needs solving, together with a reason for why it is important.
- Make quests visible to as many people as possible. Default: whole organization. Possibly even more visible, to customers or the public, where not inappropriate. Visibility enables serendipity. The point is to facilitate the match of person and task. For the right people to be able to self-select to the right tasks, the quests need to be visible. Depending on the size of the organization, this may mean different things, but for example you could keep a quest board on the intranet. Draw inspiration from Open Space technology. You need a grid, with the "what needs doing" (quests) visible to all. Then, whoever comes is the right people. Maybe some things turn out to not be that important after all. Or people agree it is important but nobody wants to do it - that would be yet another quest to solve. Perhaps outsource that task.
- Make people visible the same way. Of course, one of the best ways is if people actually know each other and have met face to face. Other than that, intranet profiles with skills, interests and current and completed quests may be helpful.
- Allowing people to "go" where the organization resonates the most with them enables what Lynda Gratton calls "hot spots" and makes sure everyone ends up, for the time being, exactly where they need to be - i.e. no under-utilization of people's passions and skills!
- Roles are quest specific, which still allows people to be very specialized if they want - taking similar roles in all quests they participate in - or more generalized, synthesizing knowledge acquired from playing many different roles in different quests.
- From an employee point of view, being organized in a "function" or department simply isn't all that fun. I remember, as an employee, several times hearing about something they did in another department that I got excited about, thought I could contribute greatly to but had no way of helping out on while still doing "my job". Letting me decide what quests I embark on changes that.
- From a management point of view, a big reason function lock-in is a problem is because it squelches passion. You already have bright & talented people (right?) and if there is a place in your organization that resonates really strongly with them, you should let them go there. You need them to go there. That is when the magic happens.
- Essentially the point is to create a kind of market for ideas, tasks & people inside the organization. The "currency" is the time and effort that people are willing to give to a particular cause, or quest.
By letting people self-select to quests that resonate most with them, and design their own quests, we allow both for passion-driven development of individuals and aggregate the wisdom of the crowd as people have the independence to use their judgment about what is meaningful and where they will have impact.
|
OPCFW_CODE
|
[SPARK-11077] [SQL] Join elimination in Catalyst
Join elimination is a query optimization where certain joins can be eliminated when followed by projections that only keep columns from one side of the join, and when certain columns are known to be unique or foreign keys. This can be very useful for queries involving views and machine-generated queries.
This PR adds join elimination by (1) supporting unique and foreign key hints in logical plans, (2) adding methods in the DataFrame API to let users provide these hints, and (3) adding an optimizer rule that eliminates unique key outer joins and referential integrity joins when followed by an appropriate projection.
This change is described in detail here: https://docs.google.com/document/d/1-YgQSQywHfAo4PhAT-zOOkFZtVcju99h3dYQq-i9GWQ/edit?usp=sharing
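In rough terms, the unique-key variant of the rule can be sketched as a predicate over a left outer join followed by a projection. This is a conceptual illustration only: the real optimizer rule operates on Catalyst logical plans and attribute sets, and the function and parameter names below are my own, not part of the PR.

```python
# Conceptual sketch of unique-key outer-join elimination; not Catalyst code.
# A left outer join followed by a projection can be replaced by the left side
# alone when (a) the projection references only left-side columns, and
# (b) the right-side join key is declared unique, so the join can neither
# drop nor duplicate left-side rows.
def can_eliminate_left_outer_join(projection_cols, left_cols, right_key_is_unique):
    uses_only_left = set(projection_cols) <= set(left_cols)
    return bool(uses_only_left and right_key_is_unique)

# Example: SELECT o.id, o.total FROM orders o LEFT JOIN customers c ON o.cid = c.id
# With c.id unique, the join is a no-op for this query and can be dropped.
print(can_eliminate_left_outer_join(
    projection_cols={"id", "total"},
    left_cols={"id", "total", "cid"},
    right_key_is_unique=True))  # True
```

The foreign-key (referential integrity) case in the PR follows the same shape, with the additional requirement that every left row is guaranteed a match on the right.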
Merged build triggered.
Merged build started.
Test build #43619 has started for PR 9089 at commit 578797c.
Test build #43619 has finished for PR 9089 at commit 578797c.
This patch fails Scala style tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43619/
Test FAILed.
Merged build triggered.
Merged build started.
Test build #43620 has started for PR 9089 at commit 7c7357b.
Test build #43620 has finished for PR 9089 at commit 7c7357b.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43620/
Test PASSed.
Merged build finished. Test PASSed.
@marmbrus I addressed your comments from the review about a month ago:
Foreign key references now store the referenced relation directly as a logical plan rather than requiring a catalog lookup.
We now use semanticEquals and AttributeSet for attributes instead of normal equality.
There were a few comments that didn't make sense on second thought:
Move the attribute equivalence check in ForeignKeyFinder to a method on LogicalPlan. We thought this would simplify the logic, but it turned out not to (still need to maintain the disjoint-set data structure, and the logic gets split between LogicalPlan and Project).
Move foreign key attribute resolution to its own rule that runs at the end of analysis. This would work fine, but it seems to fit well within ResolveReferences.
Finally, the new DataFrame methods should probably be marked as alpha somehow, but I'm not sure of the best way. Maybe a new ScalaDoc group?
cc @rxin, @jkbradley
We can tag them as Experimental (even though the entire DataFrame API is experimental!)
Merged build triggered.
Merged build started.
@rxin Thanks, I added the Experimental tags.
Test build #43633 has started for PR 9089 at commit 55bb135.
Test build #43633 has finished for PR 9089 at commit 55bb135.
This patch fails MiMa tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Merged build finished. Test FAILed.
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43633/
Test FAILed.
Jenkins, retest this please.
Merged build triggered.
Merged build started.
Test build #43638 has started for PR 9089 at commit 55bb135.
Test build #43638 has finished for PR 9089 at commit 55bb135.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43638/
Test PASSed.
Calling uniqueKey on a DataFrame throws out the column names. Is that intended?
Merged build triggered.
Merged build started.
@jkbradley Oops, thanks for catching that. I introduced it in 50717599f1eb5bf2184a6b1df2e0aebabdebddec because I misunderstood the function of transformExpressionsDown. Should be fixed now.
Test build #43757 has started for PR 9089 at commit e1ec23d.
Merged build triggered.
Merged build started.
Test build #43758 has started for PR 9089 at commit 0cd8a91.
Test build #43757 has finished for PR 9089 at commit e1ec23d.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Merged build finished. Test PASSed.
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43757/
Test PASSed.
Test build #43758 has finished for PR 9089 at commit 0cd8a91.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/43758/
Test PASSed.
Merged build finished. Test PASSed.
@ankurdave Np, thanks for the fix. Btw, should the fix be accompanied by a unit test to catch that issue?
Build triggered. sha1 is merged.
Build started sha1 is merged.
Test build #45232 has started for PR 9089 at commit 5abceae.
Test build #45232 has finished for PR 9089 at commit 5abceae.
This patch passes all tests.
This patch merges cleanly.
This patch adds the following public classes (experimental):
case class KeyHint(newKeys: Seq[Key], child: LogicalPlan) extends UnaryNode
sealed abstract class Key
case class UniqueKey(attr: Attribute) extends Key
case class ForeignKey(
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/45232/
Build finished. 5912 tests run, 0 skipped, 0 failed.
Thanks for the pull request. I'm going through a list of pull requests to cut them down since the sheer number is breaking some of the tooling we have. Due to lack of activity on this pull request, I'm going to push a commit to close it. Feel free to reopen it or create a new one.
|
GITHUB_ARCHIVE
|
Upgrade to Fedora 13 from Fedora 12 with Preupgrade - Comment Page: 1
I think the easiest way to upgrade Fedora 12 to Fedora 13 is to use Preupgrade, which downloads the needed packages from the server; then you just reboot into the installer and, after the install, boot the new system. In this guide I use the preupgrade-cli version, which works from the command line. If you want to use the Preupgrade graphical version, you can check the Howto upgrade Fedora 11 to Fedora 12 with Preupgrade guide, which works the same way with Fedora 13 (Goddard). It's important to back up your important files before upgrading. Upgrade to Fedora 13 from Fedora 12 with Preupgrade 1. Change to root su - ## OR ## sudo -i 2. Update Fedora 12 system packages yum update 3. Install preupgrade (just...
3 comments on “Upgrade to Fedora 13 from Fedora 12 with Preupgrade - Comment Page: 1”
Ran the preupgrade and no problem but when I rebooted it didn’t go into installer. I previously upgraded from 11 to 12 this way without problems. It says it can’t find the kickstart and goes into manual. When I point it to the image it says it does not exist. Ideas?
Is it possible that the Preupgrade boot image wasn’t downloaded? Example because there wasn’t enough space on boot partition?
Nothing irreversible has occurred yet; I recommend the following method to solve this problem.
Can you get the GRUB menu?
If not, then boot some live CD (to a shell or desktop) and mount your boot partition (normally with the following procedure).
Then edit the grub/menu.lst file and comment out the Preupgrade rows (something like the following).
Then umount the partition and reboot.
Then your system should boot back to your Fedora 12 normally.
Then I recommend trying the Preupgrade method again and checking that the boot partition has enough free space for the boot image (about 150 MB should be enough, I guess). And when Preupgrade is finished, it's also good to check that the kernel file /boot/upgrade/vmlinuz and the initrd file /boot/upgrade/initrd.img really exist before the next reboot.
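The edit described above can be sketched like this. The device name, mount point, and the sample menu.lst stanza are assumptions for illustration (your entries will differ); the sed step is demonstrated on a throwaway copy so nothing on your system is touched.

```shell
# From a live CD, run as root. /dev/sda1 is an ASSUMED boot partition name:
# mkdir -p /mnt/boot && mount /dev/sda1 /mnt/boot
# Then comment out the Preupgrade stanza in grub/menu.lst.
# Demonstrated here on a sample file (stanza contents are made up):
cat > /tmp/menu.lst <<'EOF'
title Upgrade to Fedora 13 (Goddard)
	kernel /upgrade/vmlinuz preupgrade
	initrd /upgrade/initrd.img
title Fedora (2.6.31.12-174.fc12.i686)
	kernel /vmlinuz-2.6.31.12-174.fc12.i686 ro root=/dev/VolGroup00/LogVol00
	initrd /initramfs-2.6.31.12-174.fc12.i686.img
EOF
sed -i '1,3 s/^/#/' /tmp/menu.lst   # prefix the three Preupgrade lines with '#'
grep -c '^#' /tmp/menu.lst          # prints 3: only the Preupgrade stanza is disabled
# Finally:
# umount /mnt/boot && reboot        # boots back into Fedora 12 normally
```

The Fedora 12 kernel entry is left untouched, so GRUB falls through to it once the Preupgrade stanza is commented out.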
Your page was very helpful.
For reasons that escape me, preupgrade made a typo in the grub entry required for the installation (an obvious missing space before the kickstart file).
However a more troublesome problem was that after freeing up space in /boot I still didn’t have enough–the install image is apparently 150 MB. (Both your notes and those in the fedora docs have smaller numbers.) And, unfortunately, the installer had trouble pulling in the full 150MB without a hiccup. Pulling down a copy of this install image and placing it on my local network turned out to be a relatively easy solution (not the first thing I tried…).
|
OPCFW_CODE
|
FR-5.2 doesn't fit in Shimano cassette
Using this video I determined that I needed the FR-5.2 tool (i.e. the tool fitting rotates with the sprockets). The rear derailleur is a Shimano Deore XT.
However, the tool doesn't quite seem to fit. The splines line up and I'm able to rotate the cassette, but it doesn't descend properly, and I feel like I would strip something if I tried to rotate it with more force. If I try to push the tool in with more force, nothing seems to happen. Is the tool supposed to fit this snugly?
The Bike is a 2019 YT Jeffsy
Am I using the right tool? Is there something I'm missing?
Hi, welcome to bicycles! You can check how deep those slots actually are with a nail or piece of wire and gauge how much engagement you're getting. It's possible they're a bit gunked up, but if you're getting 80% engagement you're probably fine. Because riding force winds the cassette on, it can be really tight.
Also, the torque spec is high to begin with. I used to have an adjustable wrench with my cassette tool, and it was hard because it would slip.
So the solution ended up being rather simple. The holes in the freehub were a bit dirty. I was able to get it in using a few very light hits with a hammer.
With the odd state the cassette is in, it's hard to discern everything happening here - it appears this is an XD Eagle cassette that's been somehow dismantled while the big cogs are still mounted.
Sometimes it happens that the spatial relationship between the drive-side hub endcap and the lockring doesn't come out quite as intended, and tools don't engage the splines properly. You're right to be cautious because stripping the lockring is bad news.
If you can get a little engagement but you're worried about it slipping out under force, you can often make it work by holding the tool on with something. Some examples could be threaded rod or a solid rear axle and nuts, or the bike's own thru axle and hanger (which you'd have to temporarily remove). (The FR 5.2 may not have a >12mm hole without modification though).
In some cases you can also solve this by getting the DS endcap out of the way completely if it's a slip-fit and isn't obstructed by the lockring, though often it will be.
It's possible for this to happen because the drive side endcap isn't engaged fully on to the axle somehow, so it's out further than intended and is obstructing the tool. How you'd address this depends on how the endcaps and/or tool fittings on your hub work, i.e. whether either or both are threaded.
If the tool is almost able to get in but not quite, there's the possibility of either modifying it or hunting for a thinner one, which do exist. Obviously it doesn't have a lot of material to give, but almost being able to get on anyway implies that even taking 0.02mm with sandpaper could be enough.
Even if the DS endcap can't be removed without getting the lockring off first, there may be options to give it wiggle room that may buy you a little more clearance. Again depending on the hub design/layout, this would be either removing the axle through the NDS side (probably requiring taking out and then replacing the NDS bearing) or removing the freehub body and endcap together as a unit.
|
STACK_EXCHANGE
|
As announced at Microsoft’s Connect(); event a couple of weeks ago, the company is working on several enhancements to their ALM stack. One of the changes announced is a major overhaul to their build system, currently referred to as Build v.Next. Although pieces of the new build functionality have been demoed on stage, we do not yet have access to the bits to try things out for ourselves.
With that said, what do we know today about Build v.Next?
DISCLAIMER: The details below have been gleaned from various Connect(); sessions and blog posts. The bits demoed are in an early, pre-release stage, and are subject to change before being released to the public. The screenshots shown below are sourced from Connect(); sessions.
One of the first things you’ll notice with Build v.Next is that you can create your build definitions completely within Visual Studio Online (VSO). In the image below, you can see there are currently five build templates available.
Much like TeamCity (a competing build product from JetBrains), in Build v.Next you assemble a build definition from a sequence of tasks that are executed in order. You can see a subset of the build tasks currently available in the screenshot below.
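To make the task-sequence idea concrete, a definition might conceptually look like the sketch below. This is purely illustrative: the task names and the JSON shape are hypothetical, since the actual Build v.Next storage format had not been published at the time of writing.

```json
{
  "name": "MyApp-CI",
  "tasks": [
    { "task": "VSBuild",         "inputs": { "solution": "MyApp.sln", "configuration": "Release" } },
    { "task": "VSTest",          "inputs": { "testAssembly": "**/*Tests.dll" } },
    { "task": "PublishArtifact", "inputs": { "path": "bin/Release" } }
  ]
}
```

The point is simply that each step is a discrete, reorderable task rather than a node buried in a XAML workflow.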
The build tasks provided out of the box by Microsoft will be open sourced with plans to accept tasks from the community as well. There have been multiple attempts over the past several years, with varying success, to create a community around XAML/Workflow-build activities. However, these attempts have never seen the same type of success as seen by other communities, such as the one built up around Jenkins – which currently sports over 1,000 plugins!
Also new, will be the ability to view differences between versions of your build definitions:
Although you can currently view differences between versions of the underlying XAML that comprises a build definition in Team Foundation Server today (assuming you track your XAML in version control), it is not a good experience. The XML that makes up the XAML is constantly changing (e.g. as you move things around on the XAML design surface) which makes it very difficult to “diff” only what’s important between versions. Having the ability to view changes built directly into the build system will be a huge time saver.
In case I haven’t made it obvious to this point – Build v.Next DOES NOT UTILIZE XAML in your build definitions! However, all your existing XAML/Workflow-based builds will continue to run just as they do today. The two build systems can exist in harmony, side-by-side.
Once you start a build, you will now be able to view the build log in real-time directly from within VSO:
With Build v.Next, you will also be able to share all your build agents across team projects and team project collections (TPC). No longer will you need a dedicated controller for each TPC.
Adding to that, the new Build v.Next agents are cross platform compatible so you can run your builds on Windows-, Mac- or Linux-based machines (or a combination of all three). The cross-platform build agent is based on Node.js and, therefore, will run anywhere Node.js runs. This means that you will now be able to build your Java/Android, Mac or iOS apps using VSO.
One last bit of information that is known at this point… Build v.Next can also be used to build source code that lives outside of VSO. For example, in the Connect(); demos, GitHub is shown as being an available repository type. Currently, GitHub is the only non-VSO repository type shown but I can only assume there will be others.
Finally, Brian Harry stated, in his Connect(); keynote, that he hopes to have a public preview available early next year. I, for one, am definitely looking forward to these new features within VSO and TFS. How about you?
You can watch and read more here:
|
OPCFW_CODE
|
Update Gruneisen EOS
PR Summary
Refactors the Gruneisen EOS code to make it simpler and utilize the P0 parameter (it wasn't used before).
Also adds tests to cover the EOS and changes the documentation to better-reflect the code.
I ran the tests with the previous implementation and although the changes diffed, they were only in the last three decimal places of the pressure. The difference in the bulk modulus was larger, but expected since it's a derivative quantity.
NOTE: The temperature and the EOS in expansion aren't thermodynamically consistent. This will need to be addressed in the future.
PR Checklist
[x] Adds a test for any bugs fixed. Adds tests for new features.
[x] Format your changes by using the make format command after configuring with cmake.
[x] Document any new features, update documentation for changes made.
[x] Make sure the copyright notice on any files you modified is up to date.
@chadmeyer I'm sure you're sick of reviewing my EOS changes but I think you're still the best one to review here.
These changes were mainly necessary to make our implementation operable in Pagosa. Specifically, the P_0 parameter wasn't actually doing anything in our implementation but was used in Pagosa's. I've also decided to adopt the nomenclature of eostype(m) = 11 in xRAGE since it's neater and I think demonstrates the Mie-Gruneisen form better.
There were diffs but they were in the last two decimals for pressure. The bulk modulus differences were larger, but were expected since the bulk modulus approximately scales with the square root of the pressure.
I just worry that without GPU test coverage, it'll be easy to make a change that causes a segfault or something down the line.
Fair enough. I may just loop it then rather than checking the full vector functionality.
Aiming for full code coverage is always my goal, so I should have just not been lazy :-)
Looping should be fine, since it inherits from the full vector stuff, which we already know works.
@Yurlungur I made it run on GPUs and decided to just use the same strategy as the vector EOS test since it seemed the most similar to what I was already doing.
If there's an easier way to just copy one value at a time in a for loop, I'd be happy to change the code to do that. But at least it runs on the GPU now.
There's a way to do that @jhp-lanl using either deep copies or parReduce but this is fine. I don't think it's any cleaner than what you've done.
I would like to wait until @chadmeyer reviews before merging if possible.
It would seem that there is no ability in github to comment on unchanged lines, so I'm just going to quote the section:
PORTABLE_FUNCTION void Gruneisen::DensityEnergyFromPressureTemperature(
    const Real press, const Real temp, Real *lambda, Real &rho, Real &sie) const {
  sie = _Cv * (temp - _T0);
  // We have a branch at rho0, so we need to decide, based on our pressure, whether we
  // should be above or below rho0
  Real Pref = PressureFromDensityInternalEnergy(_rho0, sie);
  if (press < Pref) {
    rho = (press + _C0 * _C0 * _rho0) / (_C0 * _C0 + _G0 * sie); // <=== THIS LINE IS PROBABLY WRONG NOW!
  } else { // We are in compression; iterate
    auto residual = [&](const Real r) {
      return press - PressureFromDensityInternalEnergy(r, sie);
    };
    Real rho1 = _rho0, res1 = residual(rho1), slope = _G0 * sie + _C0 * _C0, rho2, res2,
         rhom, resm;
    rho2 = (rho > rho1 + 1e-3) ? rho : rho1 + res1 / slope;
    res2 = residual(rho2);
    for (int i = 0; i < 20; ++i) {
      slope = (rho2 - rho1) / (res2 - res1 + 1.0e-10);
      rhom = rho2 - res2 * slope;
      resm = residual(rhom);
      if (resm / press < 1e-8) break;
      rho1 = rho2;
      res1 = res2;
      rho2 = rhom;
      res2 = resm;
    }
    rho = rhom;
  }
}
I think the second half of the routine is OK because it iterates using functions that contain pressure and the _P0 parameter, but I think this particular branch will be wrong. @jhp-lanl, do you agree?
@chadmeyer Good catch! Let me see what I can do here to reduce code duplication and make this consistent with my changes. I'll add a test as well.
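For reference, the loop in the quoted compression branch is a secant solve for the density at which the computed pressure matches the target. Below is a standalone sketch of the same scheme; the pressure law is a made-up linear stand-in, not the real Gruneisen form, so only the iteration structure is the point.

```python
# Sketch of the secant iteration from the compression branch of
# DensityEnergyFromPressureTemperature. The pressure law is a MADE-UP
# linear stand-in, not the real Gruneisen EOS.
def pressure(rho, rho0=1.0, c0=2.0, g0=1.5, sie=0.5):
    # hypothetical pressure law, monotonically increasing in rho
    return c0 * c0 * (rho - rho0) + g0 * rho * sie

def density_from_pressure(press, rho0=1.0, c0=2.0, g0=1.5, sie=0.5, tol=1e-8):
    # assumes press >= pressure(rho0), i.e. the compression branch
    residual = lambda r: press - pressure(r, rho0, c0, g0, sie)
    rho1, res1 = rho0, residual(rho0)
    slope = g0 * sie + c0 * c0       # analytic estimate of dP/drho at rho0
    rho2 = rho1 + res1 / slope       # first step from the analytic slope
    res2 = residual(rho2)
    rhom, resm = rho2, res2
    for _ in range(20):
        # secant slope: d(rho)/d(residual), regularized against divide-by-zero
        slope = (rho2 - rho1) / (res2 - res1 + 1e-10)
        rhom = rho2 - res2 * slope
        resm = residual(rhom)
        if abs(resm / press) < tol:
            break
        rho1, res1, rho2, res2 = rho2, res2, rhom, resm
    return rhom
```

Because the stand-in law is linear, the secant step lands on the root essentially immediately; for the real curved Hugoniot fit the 20-iteration cap does actual work.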
I assume we're waiting on this until we resolve current questions downstream, @jhp-lanl @annapiegra ?
We're waiting on a definitive formulation for the bulk modulus since it seems like the current formulations fail to reproduce the finite difference solution. We're also waiting on me since I've been busy with other things, but I'm going to be working on this tomorrow.
The last thing to resolve on this MR is the introduction of a maximum density for the EOS. I'll do this next week
@jonahm-LANL and @chadmeyer I've introduced the maximum density limit in the EOS now. It was made a bit complicated by the fact that we can have up to a cubic fit for the Hugoniot which could lead to multiple compression limits. Some of these limits may be excluded by other properties of the EOS, but instead of trying to run down all those limits, I figured it was easier to just naively assume that any fit could be given and that it could produce a compression limit for the EOS. If you guys can review the algorithm, I'm fairly certain (due to all the tests I've added) that the compression limit code is doing what it's supposed to.
@annapiegra and @mauneyc-LANL this will add the capability to introduce a maximum density limit if you want to merge it into your testing. I have also added a few tests that make me fairly certain the pressure and bulk modulus are accurate in singularity: I have one test comparing the calculated bulk modulus to what you would get with finite differences, and I have another test comparing a state on the reference curve to what you would get from the Rankine-Hugoniot jump conditions. Both tests pass, which makes me fairly confident in the implementation we have here.
I think I've addressed everybody's comments at this point and all tests pass both here and on gitlab. Let me know by tomorrow if you have other concerns.
|
GITHUB_ARCHIVE
|
Request: add migration explanation from the old com.google.api.client.googleapis.auth.oauth2.GoogleCredential class
I've noticed the webpage of this repository doesn't explain how to migrate from the previous one:
https://googleapis.dev/java/google-api-client/latest/com/google/api/client/googleapis/auth/oauth2/GoogleCredential.html
Is your feature request related to a problem? Please describe.
Yes, I can't find the appropriate functions that match the previous ones. There is no documentation.
Describe the solution you'd like
To have such an explanation.
Describe alternatives you've considered
Maybe write it on the old repository?
I requested this here for the sample I've found, but it hasn't been updated for a very long time. I can't find a similar sample by Google.
Additional context
The code I use (here) is very similar and based on what I've found of using the repository to reach the old API, here.
It would be very appreciated if you could tell me, at least for these cases, how to use them, or provide a nice sample that does the same thing of reaching the People API.
Here's a more updated code of what I use:
@WorkerThread
public static Services setUp(@Nullable final String serverAuthCode) throws IOException {
final HttpTransport httpTransport = new NetHttpTransport();
final JacksonFactory jsonFactory = JacksonFactory.getDefaultInstance();
// Redirect URL for web based applications.
// Can be empty too.
final String redirectUrl = "urn:ietf:wg:oauth:2.0:oob";
// Exchange auth code for access token
final GoogleTokenResponse tokenResponse = new GoogleAuthorizationCodeTokenRequest(
httpTransport, jsonFactory, GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET, serverAuthCode, redirectUrl)
.execute();
// Then, create a GoogleCredential object using the tokens from GoogleTokenResponse
final GoogleCredential credential = new GoogleCredential.Builder()
.setClientSecrets(GOOGLE_CLIENT_ID, GOOGLE_CLIENT_SECRET)
.setTransport(httpTransport)
.setJsonFactory(jsonFactory)
.build();
final String accessToken = tokenResponse.getAccessToken();
credential.setFromTokenResponse(tokenResponse);
final PeopleService peopleService = new PeopleService.Builder(httpTransport, jsonFactory, credential)
.setApplicationName(getCurrentappPackageName())
.build();
final Services result = new Services();
result.peopleService = peopleService.people();
result.otherContactsService = peopleService.otherContacts();
result.contactGroups = peopleService.contactGroups();
final ContactsService contactsService = new ContactsService("contacts");
contactsService.setOAuth2Credentials(credential);
result.contactsService = contactsService;
return result;
}
I was following this official auth guide to work with the Google Directory API and ran into the deprecated GoogleCredential issue as well.
In order to use GoogleCredentials instead of deprecated one to authenticate I did:
// Replaced GoogleCredential builder with GoogleCredentials
GoogleCredentials impersonated = GoogleCredentials
        .fromStream(new FileInputStream("PATH_TO_CREDENTIALS.JSON"))
        .createScoped(DirectoryScopes.MY_SPECIFIC_DIRECTORY_SCOPE_ENUM)
        .createDelegated("IMPERSONATED_ACCOUNT_EMAIL");
impersonated.refreshIfExpired();
// Now as old builder was returning different type - initialized HttpRequestInitializer
HttpRequestInitializer requestInitializer = new HttpCredentialsAdapter(impersonated);
// Passed requestInitializer to Directory api Builder
Directory service = new Directory.Builder(httpTransport, jsonFactory, null).setHttpRequestInitializer(requestInitializer).build();
// Rest of it is just using Directory service in this case
Hope this helps
@hmerdok How do I do this on Android though? How can I reach the json file there? I don't think it's really in the APK, no? And what's "DirectoryScopes.MY_SPECIFIC_DIRECTORY_SCOPE_ENUM" ?
|
GITHUB_ARCHIVE
|
The slope of a graph
What does the slope in this graph represent?
I know this might be a silly question, but my answer was:
slope = VI = Pw (Pw is power)
And the answer in my textbook was different from mine, it was:
slope = -r (r is the internal resistance of the battery)
And I think the book answer was related to the equation:
V = Vb - Ir (where Vb is the emf of the battery, V is the voltage across the battery, I is the current, and r is the internal resistance).
So, are the two answers correct? Or it is only one?
Remember the slope formula? Can you guess the units for the expression $$\dfrac{V}{I}$$?
Slope = $\Delta y/\Delta x$, so $V/I$ in this case. Now, what's Ohm's Law say about $V/I$?
For this formula it will be V/A and this equals ohm Ω, but my graph shows an inverse proportionality, and the formula you gave is for direct proportionality. @Hiiii
Hey slope "formula" doesn't depend on how the graph looks. It is always this : slope = (change in y) / (change in x)
@Hiiii, oh, Ok, I got it.
@Asmaa look at the vertical y axis. What is the label on this axis?
Okay good :) Feel free to ask if something doesn't make sense with the slope
The slope of a graph goes as follows: slope = $\Delta y/\Delta x$. The voltage is here on the Y-axis and the current is on the X-axis. As I grows (so $\Delta x > 0$), V decreases ($\Delta y < 0$). If you enter this into the formula you get slope = $\Delta V/\Delta I < 0$. Ohm's law states that $V = IR$, so $|\Delta V/\Delta I| = R$. So this means that the slope of this curve equals $-R$.
Um.. slope = $\Delta V/\Delta I$
Ohm's law states $V = IR$
=> $R = V/I$
=> Slope = $R$
Since the slope is down the resistance is minus.
Your answer is wrong.
The slope of that graph is V/I, which is the resistance.
-R, as it's a negative slope.
The answer given in your textbook is absolutely correct.
For a cell with internal resistance the voltage available is given by that equation.
The internal resistance of a battery is not negative.
Of course not. But the slope of the graph is. The discussion was about the slope. The slope is -V/I and V/I is R so it's -R.
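To put numbers on the discussion: fitting terminal voltage against current with least squares recovers a slope of -r and an intercept equal to the emf. This is a small self-contained sketch with synthetic data and made-up values (emf = 1.5 V, r = 0.5 Ω).

```python
# Least-squares fit of terminal voltage vs. current for V = emf - I*r.
# The slope of the fitted line is -r; the intercept is the emf.
currents = [i / 10 for i in range(11)]        # 0.0 .. 1.0 A
emf, r = 1.5, 0.5                             # made-up battery parameters
voltages = [emf - i * r for i in currents]    # V = emf - I*r

mean_i = sum(currents) / len(currents)
mean_v = sum(voltages) / len(voltages)
slope = sum((i - mean_i) * (v - mean_v) for i, v in zip(currents, voltages)) \
        / sum((i - mean_i) ** 2 for i in currents)
intercept = mean_v - slope * mean_i
print(slope, intercept)   # slope ≈ -0.5 (= -r), intercept ≈ 1.5 (= emf)
```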
It's a load line for determining amplification of tubes, FETs, or bipolars.
|
STACK_EXCHANGE
|
“There is no way I can teach this to someone else; I don’t even know if I know this well enough myself!”
This is what goes through my head every time I talk to anyone who wants to know what I know. Whether this is related to academic work, open-source projects, or occupational tasks, there is always the fear of being called an impostor. Even though I know I can accomplish the things they ask about, for some reason it’s hard for me to translate my scattered mental constructs into an easy-to-understand set of instructions for others. However, over the years I’ve found a few surefire ways of overcoming this self-intimidation among my peers and students: training to teach.
Training to teach a subject is the best way that I’ve found to learn a subject. For me, teaching what I know is grounded in my desire to learn something and to retain that knowledge. So that is the approach I tend to take when creating a course or preparing to learn a new skill: I learn as if I’m going to become the “expert” in that area of knowledge.
I’ve always loved using the Python programming language. I started using it as part of my work with GIS and spatial analysis in GUI-based tools like ArcGIS and later QGIS. However, it wasn’t until I started doing batch data operations and more complex spatial analysis that I really learned the utility of the Python libraries that these GUIs used in the background. Over the years, I became very good at knowing how to use Python to get things done, but not so great at telling others how I did those things.
Fast forward a little over a decade, I’m on my second round of teaching a Python for Data Science course to my local Meetup groups and my academic colleagues. Through the iterative processes of going through materials and then meeting with co-learners to go over the materials, I’ve not only found a way to retain my knowledge, but several ways to help others understand those same concepts. What follows are my biggest takeaways from this process.
The biggest thing to remember as someone with experience: the basics for you are not the basics for a newcomer. You as the teacher and co-learner have to understand that in order to help a newcomer build their foundation, you have to learn how you built yours. No two developers or data analysts think the same nor will they work in the same ways. So how do I effectively help someone learn things that have become intuitive to me as a researcher and data geek?
First and foremost: slow down. One thing I’ve learned in teaching is to take time to watch people working through the foundational concepts in Python. This way you ensure they fully understand not only what they are seeing, but that they can apply those concepts to their Python paradigm (e.g., the reasons and use-cases they started learning Python for in the first place). Ask questions and make sure exercises and demos are placed in the right spots as to not overload their ability to learn Python as a cumulative process (e.g., they are learning Python in a manner that connects basic elements into more complex concepts). You’re their guide through this process, so it’s important not to lose them in the maze of your own thoughts.
Also keep in mind that you may not know everything about Python and that is fine so long as you know what you know correctly. If you approach teaching as a process of co-learning, you and the student get to explore the subject together. This does a couple of really interesting things. First, it lets them know that even experienced people are constantly learning. Secondly, you both get to have a shared learning experience that helps them learn problem-solving techniques and social practices in a working environment. These wouldn’t seem to be key skills to learn in a coding class, but they often prove critical in interviews and workplaces where working as part of a team requires a self-reliant person with effective social decorum to ensure tasks are understood fully and accomplished on time.
Keeping the above points in mind, it is important to start small and build in a way that best suits the students (and in some cases yourself). It is okay to build a very ambitious curriculum for a class, but if you’re losing students you have to take a step back and ask why. Flexibility and reflection are critical in teaching as it can help you and the students to meet common goals and more importantly, mutually-shared small victories. Building a simple hello world script may not seem very huge to a teacher, but that may be just what a student needs to say, “I can do this, what’s next!” This means having a course modular enough to adjust direction with scenarios that help them understand how small pieces build towards a bigger picture.
Tailor the Material
As mentioned above, a well-conceived course is only as good as its ability to be used by a student. This is why auditing your students (if possible) should be one of the first things done in your class. A well-designed survey can help preface your teaching expectations before the students show up for the first day of class. Next, spending the first part of the course getting to know what they expect to get out of the lessons, as well as what they may perceive as a challenge, will help you better adjust course materials. Often a class is not about teaching a subject to students: it’s about helping them to ask better questions for themselves using the tools, techniques, and methods you offer.
This is where building the course to your strengths meets with the strengths you want to build for them and for yourself. If you understand how the class needs and desires to use what you offer, it is the first step in building a course that you and they will enjoy. This means that you as a teacher will have to initially prepare for rapid revisions in the material and learning goals. There is a saying: “Proper planning and preparation prevents piss-poor performance.” This is especially true of teaching a technology that is constantly evolving as the demands surrounding the technology evolve and directly affect it.
The above teaching tasks depend heavily on your ability to obtain and use feedback from your students and to apply it quickly to ensure they are getting the most from the class. This refers back to two concepts mentioned above: flexibility and modularization.
Flexibility in the face of feedback from the class depends heavily on your depth of knowledge and your ability to hack that knowledge for your students’ benefit. If they are having issues with data structures, make sure you have separate demos and exercises that focus on those challenges outside of the primary scenario for the class, but designed with the intent to bring those learned skills back into the class overall.
This is a part of keeping course materials modular. For the teacher, it is useful to help students build a foundation from several small pieces of knowledge. For the student, it is useful to have a bunch of small victories through building tiny pieces of the puzzle that they can later use in building bigger solutions.
Teaching Python for Data Science
In order to bring the above concepts together, I’ll bring you through a brief example of my experience co-learning and then teaching Python in the domain of data analysis and data science.
In 2017, I founded my second data science group. Previously, I had mainly organized speakers, as my first data science group was in a bigger metropolitan area where there were already established data professionals who wanted to use the group for professionalization and networking. However, this new group was eager to learn new skills and to build community through a co-learning education process. I decided to build my first Python for data science course as an immersion course. This was very much a co-learning approach where there were no direct leaders: just a community of like-minded folks reading the same books and going through the same exercises that they would talk about on Slack and in person once a month.
The turnout for the first event was great. The place was packed despite the bad weather, which is always a good sign. However, once they learned about the format, the number of attendees dropped off drastically. Over the 5 months of this course, it went from about 30 people interested in learning to only 3 who completed projects (one of which being me). So I immediately asked for feedback from those who attended and the larger community. It turned out that most people just wanted direct instruction with usable examples and well-organized code snippets they could use in their work. This was my eureka moment.
I realized that I hadn’t done any of the things I have mentioned above. I failed to know the people interested in the course and I failed to make it relevant to them. For those who stuck around, I failed to adjust material and format to ensure they stayed interested in not just learning Python, but also in building a bigger, local Python community.
So I rebuilt my course from the ground up. I made it more modular and left space for pauses, exercises, and demos. I make sure I’m constantly asking for feedback and that if they have issues or questions, that I’m adding those into the course material for our benefit. Once I was able to do these things, I found that more students stuck around for the course and that feedback was much better. Upon completion of my second iteration of teaching/co-learning, I already have requests for additional topics and for another run of the introductory materials.
I still have a long way to go as a teacher and as a Pythonista, but I found that bringing my strengths and weaknesses to the table helps everyone so long as I maintain my passion for the subject and help newcomers foster their own passions and motivations for learning and growing their skills.
When building course materials for teaching Python, I drew on many sources. Many thanks to Wes McKinney, Jake VanderPlas, Dan Bader, Mike McKerns, Dan Dye, John Zelle, David Beazley, and many other people who made learning and using Python a blast!
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
|
OPCFW_CODE
|
I’m going to work through the Responsive section of the coursework again because it just didn’t click, but I was wondering if I could get some suggestions.
I’m trying to get some extra practice in by working on building a very basic website as a surprise for my friend’s wedding, but I noticed that when I reduce the width of the browser, the H1 elements go up and off the page, and the form aligns weirdly.
I’m also having trouble with the margin or padding being off on the header on the RSVP website. There’s about 5 pixels on the top-left that doesn’t want to work unless I put a margin: -5px; there, but I don’t know what’s causing it to show up there in the first place.
Can I get any tips on that, please?
Home HTML page CodePen: https://codepen.io/wordsthatrhyme/pen/ZEpydMO
RSVP HTML page CodePen: https://codepen.io/wordsthatrhyme/pen/WNGOVvQ
The RSVP page loaded differently in CodePen than how I have it locally. The content is centered and not all the way up at the top in my local save.
P.S. The background images work on my local file, but I don’t feel like linking them in here since I would have to find the website that they are hosted on again, so please pay that no mind.
@wordsthatrhyme You don’t need to write the <head> tag and <body> tag! CodePen writes those for you in the iframe used for your output preview.
If you want to add some tags to the <head> tag, there is a tool in CodePen for it:
Click the settings, click HTML, and you can see it.
Thank you. I’ll make those changes now. Do you have any other recommendations on making this a responsive website outside of CodePen?
Looks like a fun project.
I noticed that if you change your main style to this…
…it seems to make the h1 element behave better responsively.
Also, the RSVP link is white, I can barely see it. But it might go with the background.
Hope this helps.
PS: Many people use the mobile-first approach, and I’ve found it helpful. Design for small screens first, then move to larger screens.
Thank you so much!
I’ll change the CSS for the main element in my local file and CodePen when I’m back at my computer and see how it goes.
I’m trying to figure out the mobile-first method so that I can use that for my next project.
I might just move my current CSS file to another folder and start from scratch with the mobile-first method in this project as well.
Do you prefer media queries, grid, or flexbox?
I personally prefer flex box, but I’m pretty new to coding, so I just might not have enough practice with grid.
Flex and grid aren’t a substitute for media queries. I still need media queries for responsiveness whether I use flex, grid, or positioning.
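To illustrate the mobile-first pattern being discussed (the selector and breakpoint below are made up for the example): base styles target small screens, and a min-width media query layers the larger-screen layout on top — flexbox handles layout within each breakpoint, while the media query decides when the layout changes.

```css
/* Mobile-first: base styles apply to small screens */
.cards {
  display: flex;
  flex-direction: column; /* stack items vertically on phones */
  gap: 1rem;
}

/* Larger screens: switch to a row layout */
@media only screen and (min-width: 600px) {
  .cards {
    flex-direction: row;
    flex-wrap: wrap;
  }
}
```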
Thanks Andrew!! It is useful to me too.
@wordsthatrhyme, in both of your pens there is a typo;
@media only screens and (max-width:600px)
“screens” is not a valid media type. It should be “screen”.
Your media queries will work when you correct the typo.
Thank you so much! I appreciate your suggestions.
Thank you so much!
I’ll correct that now. I thought I didn’t have a good understanding of it since I couldn’t get it to work.
|
OPCFW_CODE
|
Areas of NonStop-specific configuration
Certain areas of configuration are specific to IBM® MQ on HPE NonStop.
Location and size of Guardian data files
IBM MQ for HPE NonStop V8.1 stores all dynamic data in audited Enscribe files. The data includes dynamically changing persistent queue data and object descriptions (such as channel definitions). The Guardian subvolume of these files and (for queue data files) primary extents, secondary extents, and maxextents can be configured by using runnscnf. See The runnscnf tool for details.
Processes for non-persistent messages
Non-persistent queue messages are kept in main memory and managed by specific processes called cache managers. You can configure multiple per-queue manager cache managers. Cache managers can run on a different CPU than the queue manager, providing some scalability. Each cache manager is responsible for storing non-persistent messages of a configurable set of IBM MQ queues. The relationship between cache managers, IBM MQ queues, CPU, and so on, can be configured by using runnscnf.
Set signal settings
IBM MQ for HPE NonStop V8.1 supports the MQGET SET_SIGNAL feature in the same way as IBM WebSphere® MQ for HP Integrity NonStop Server V5.3, and the signal can be used from programs in the same way. As the implementation for version 8 is different from version 5.3, there are some settings available from runnscnf. The MQGET SET_SIGNAL feature in IBM MQ V8 is implemented using a new process class called SetSignalManager. SetSignalManager processes can be configured with one process doing the work for the whole queue manager, or several SetSignalManager processes can run on different CPUs. SetSignalManager processes are configured by using runnscnf.
EMS messages
IBM MQ for HPE NonStop V8.1 can optionally issue more, and more detailed, EMS messages than WebSphere MQ for HP NonStop Server V5.3. The extent of EMS messages issued and the collector used can be configured by using runnscnf.
Queue manager global settings
There are certain NonStop-specific settings that are global to the queue manager. These settings include the home terminal being used, the CPU, the priority of certain NonStop-specific processes, tuning parameters, CPU set assignment for each queue manager, settings for fault tolerance, default TCP transport, and so on. All these settings are available by using runnscnf.
User name mapping
User names within IBM MQ have a length limit of 12 characters per name. NonStop user names can be up to 17 characters long. On NonStop, names consist of group name, a dot (‘.’), and a user name, like: MYGROUP1.USER1. Group name and user name can each be up to 8 characters long, so the name of a regular NonStop user can be up to 17 characters long and thus does not fit in the data structures used by IBM MQ to store user identification.
WebSphere MQ for HP NonStop Server V5.3 provided the tools dspmqusr and altmqusr to create a mapping between NonStop-specific user names and names used within IBM MQ. This mapping can be used to establish a one-to-one relationship between IBM MQ internal names and NonStop user names. These tools are available in IBM MQ for HPE NonStop V8.1, with similar syntax and functionality to the WebSphere MQ for HP NonStop Server V5.3 tools. See altmqusr and dspmqusr.
IBM MQ user names are sometimes referred to as Principal Names. The mapping described here is stored in an internal database. The crtmqm command creates this database and adds an entry for the user who ran the installation script (the installation owner). The principal created is always ‘mqm’, for compatibility with other IBM MQ implementations.
After you have created a queue manager, you can create entries in the database for other users of the queue manager.
You can use IBM MQ standard mechanisms to authorize NonStop users outside of the MQM group (which is reserved for IBM MQ administration) to use certain features and/or resources of an IBM MQ queue manager. For this authorization to be effective, the following conditions must be met:
- A mapping must be present in the internal database.
- Authorization must be explicitly granted to all resources accessed (see the setmqaut command).
NonStop users within the MQM group (other than the installation owner) can administer (create, start, stop, secure, and so on) any queue manager within the installation. No user name mapping is required to do so.
Applications require a mapping entry to use a queue manager (put, get, browse, and so on). If no
mapping entry is found, application access is refused. No explicit or additional authorization (via
setmqaut) is required, however.
|
OPCFW_CODE
|
Hello, I am running DB2 10.1 with Spatial Extender.
I am trying to insert a row into a table with a spatial area column.
Here is the table def:
CREATE TABLE test1.areas1 (area db2gse.ST_polygon, id integer)
When I try to perform the following insert:
insert into test1.areas1(id,area) values(1,db2gse.ST_PolyFromText('polygon((33.039749731553464 34.01353063350853,33.04575109023584 34.04494032719386,33.03557814117029 34.010830886211814, 33.039749731553464 34.01353063350853))',0))
I get the following exception:
Routine "DB2GSE.GSEGEOMFROMWKT" (specific name "GSEGEOMWKT1") has returned an error SQLSTATE with diagnostic text "GSE3421N Polygon is not closed.".. SQLCODE=-443, SQLSTATE=38SSL, DRIVER=3.63.108
It seems to me the polygon is closed, what could be the problem?
This topic has been locked.
Pinned topic db2 spatial extender - insert fails due to unclosed polygon
DavidWAdler replied (2012-12-18T12:20:39Z, accepted answer):
The problem is due to using the spatial reference system (SRS) with the identifier '0'. This SRS is not associated with a coordinate system and is seldom used, although it may be appropriate for integer coordinates.
Spatial Extender stores coordinates internally as 64-bit integers for efficiency in processing. The offset and scale values of an SRS convert between the external floating point representation and internal integer representation. The offset and scale values need to be set so that coordinate precision is not lost.
SRS 0 uses an offset of 0 and scale of 1. This means that the coordinates you specified are basically converted to integers, (33 34, 33 34, 33 34, 33 34), collapsing to a single point.
What do these coordinates represent?
If they are North America latitude and longitude, SRS 1 is appropriate and if they are other worldwide coordinates, SRS 1003 is appropriate. SRS 1 has a multiplier of 1,000,000 and SRS 1003 has a multiplier of 5,965,232.
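The precision loss can be sketched numerically (illustrative Python; whether the conversion rounds or truncates is an internal detail, and either way a scale of 1 collapses these vertices):

```python
def to_internal(coord, offset=0.0, scale=1.0):
    # Spatial Extender-style conversion from the external floating point
    # representation to the internal integer representation
    return round((coord - offset) * scale)

ring = [(33.039749731553464, 34.01353063350853),
        (33.04575109023584, 34.04494032719386),
        (33.03557814117029, 34.010830886211814),
        (33.039749731553464, 34.01353063350853)]

# SRS 0 (offset 0, scale 1): every vertex maps to (33, 34), so the
# polygon collapses to a single point and the insert fails
srs0_vertices = {(to_internal(x), to_internal(y)) for x, y in ring}

# An SRS with a multiplier of 1,000,000 keeps the vertices distinct
srs1_vertices = {(to_internal(x, scale=1_000_000),
                  to_internal(y, scale=1_000_000)) for x, y in ring}
```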
|
OPCFW_CODE
|
Feat/mulitple filters
The core of this feature is to add two new methods
addRecordFilter
addFilter
Unfortunately, I did something I tell others not to do and merged some cleanup and it got mixed in so I will enumerate. Sean / Johnny, I think maybe a quick walkthrough if you need it. Things also in here:
Make sure that html attributes are set to represent the properties. Best Practice
A slight reshuffle of the configuration UI elements to help clear up confusion on the filtering options. Dave to comment further
Along with the tests I have added, I did verify by modifying the element and adding a second set of filters and made sure that it would work correctly. I removed them cause it looks clunky until we get some updates from the lightning team on more flexible configuration UI support.
Still need to work further with Gigi on producing full samples of how to use this in a composite component.
Yeah, I think a quick walkthrough would be great.
Closed by mistake. Reopen
Hi guys @pozil @LGraber , may I ask if you're going to merge this and release any time soon? I was actually looking to fulfil the requirement to have more than one record filter.
It's all good on my end. I'll let @LGraber hit that green button :)
hi @PalGenadich - We are reviewing the code to be sure that we are enabling developers to add multiple filters when embedding our component.
Tableau Viz LWC is being Archived
All outstanding PRs and Issues are being addressed and closed. For more information, please see the README
|
GITHUB_ARCHIVE
|
There is an opportunity to improve the user's interface in Windows by setting window transparency. In Windows versions beginning from 2000, it is implemented by setting the WS_EX_LAYERED style for the window. But here we are faced with a problem: the style can't be set for child windows. A semi-transparent window looks showy, but everything drawn on it (including controls) will also be semi-transparent, which considerably worsens the ergonomics of the user's interface. This article describes the SkinTooltip control, developed by the KB_Soft Group company for its internal needs, which allows the effect of non-transparent controls on a semi-transparent window to be achieved. The approach described below also allows different animations to be implemented on the control while displaying it. It is necessary to note that the control is not really semi-transparent; it only imitates transparency. Because of this, some restrictions on its usage appear, which are described below.
2. The description of the control's work
The essence of our approach to the solution of the given problem is as follows. The control itself (and all the child controls on it) is not semi-transparent. The semi-transparency of the window is imitated. Two steps are performed for this.
On the first step, a screenshot of the parent window is made using the following function:
public static void GetControlScreenshot( Control control, ref Bitmap bitmap )
{
    if (control == null || control.Handle == IntPtr.Zero) return;
    if (control.Width < 1 || control.Height < 1) return;
    if (bitmap != null)
        bitmap.Dispose();
    // preparing the bitmap.
    bitmap = new Bitmap(control.Width, control.Height);
    using (Graphics destGraphics = Graphics.FromImage(bitmap))
    {
        // setting the flag indicating that we need to print both the client's and
        // the non-client's window rectangles.
        int printFlags = (int)( Win32API.PRF_NONCLIENT | Win32API.PRF_CLIENT );
        System.IntPtr param = new System.IntPtr(printFlags);
        System.IntPtr hdc = destGraphics.GetHdc();
        // sending the WM_PRINT message to the control window.
        Win32API.SendMessage(control.Handle, Win32API.WM_PRINT, hdc, param);
        destGraphics.ReleaseHdc(hdc);
    }
}
The function draws the control window to the bitmap parameter. It is done with the help of the SendMessage Win32 function, which sends the WM_PRINT message to the window; its parameters specify the device context for output. The controls of the parent window are also drawn on the obtained image. Then the image is displayed on the control's surface, and as a result the control becomes invisible on the parent window.
On the second step, the background of our semi-transparent window is set to another object of the Bitmap class. The background should be an image with the set semi-transparency (i.e. having an alpha channel). All child controls are drawn on this image (this is needed to implement animation). The obtained image is drawn above the background already displayed, and all the controls become visible. As a result, we have the effect of a semi-transparent window with non-transparent controls.
Animation is performed using the image obtained on the second step. To achieve the effect of the control smoothly appearing over the background of the parent window, the alpha value of each of the bitmap's pixels is multiplied, on each timer message, by a multiplier in the range from 0 to 1.0; as a result, the image varies from complete transparency to the value initially set in the image that is specified as the control's background on the second step. To perform this, the .NET Framework library has a standard mechanism based on the ColorMatrix class. Using this class, you may specify the transformations to be done with the image's colors before it is displayed on the screen.
Some restrictions concerning the control's usage result from the described algorithm. Since the screenshot of the state of all the controls on the parent window is made only once before displaying, changes in their appearance may break the illusion of semi-transparency. Any changes in the child controls on the semi-transparent window after its first initialization may also lead to the control's malfunction. The given restrictions were not critical for the task KB_Soft Group was faced with while developing the given controls, but further elaboration of the components may be necessary for other cases.
3. The description of the control's usage
The control was inherited from the SkinControl base class. The class usage was described in the "The controls of an arbitrary shape" article. The most important properties and functions of the class are described below.
AlphaAnimation - the property setting whether to use animation with smooth transparency changes.
AnimationInterval - the property setting the time interval between two animation steps.
AnimationPosition - the property setting the position to start the control's animation.
AnimationSteps - the property setting the number of animation steps.
ExpandAnimation - the property setting the animation type by changing the control's sizes (moving, stretching, and so on).
Labels - the collection of non-transparent labels on the control.
FrontImage - the property setting the control's background image.
Animate() - the function starting the control's animation.
The simplest way to use the control is to add it to the Toolbox in the Visual Studio environment and to set all its properties. But for a more deliberate usage of the control, its manual creation is given below. Create a new Windows-application project in C#. Add a new resource, the result.png image. It will be our control's background and also a pattern for specifying its shape. Do not forget to specify Build Action = Embedded Resource for the resource.
Add to the application a reference to the KBSoft.Components.dll library, and add the using directives to the beginning of the file containing the form:
Now add to the class two new fields:
private SkinTooltip skinTooltip = new SkinTooltip();
private Button btn = new Button();
Add the following code to the constructor:
//getting the assembly for extracting the resources.
Assembly currentAssembly = Assembly.GetAssembly( this.GetType() );
//setting the pop-up animation.
skinTooltip.AlphaAnimation = true;
//setting the time interval between animation steps.
skinTooltip.AnimationInterval = 40;
//the position to display the control.
skinTooltip.AnimationPosition = new System.Drawing.Point(80, 24);
//the number of animation steps.
skinTooltip.AnimationSteps = 20;
//setting not to use animation by changing the control's sizes.
skinTooltip.ExpandAnimation = false;
//setting the image of the control's background
//(the resource name may need the project's default namespace prefix).
skinTooltip.FrontImage = (Bitmap)Bitmap.FromStream(
    currentAssembly.GetManifestResourceStream("result.png") );
//setting the control's shape.
skinTooltip.PatternBitmap = (Bitmap)Bitmap.FromStream(
    currentAssembly.GetManifestResourceStream("result.png") );
//setting the color of the image parts that are not included in the control's shape.
skinTooltip.TransparentColor = Color.FromArgb( 255, 255, 255 );
//creating the button and adding it to the list of the controls
//that are child for the skinTooltip control.
btn.Size = new Size(50,30);
btn.Location = new Point(150,30);
btn.Text = "Demo";
btn.FlatStyle = FlatStyle.System;
skinTooltip.Controls.Add( btn );
//the call needed for the control to function correctly.
//necessarily set the control's parent window.
skinTooltip.Parent = this;
The result is shown in the screenshot below.
|
OPCFW_CODE
|
Android Studio: disabling "External build" to display error output creates duplicate class errors
I am starting my migration from Eclipse to Android Studio, and start playing with new projects on Studio.
My test project was working fine till I got some errors messages.
I had to do some manipulation (https://stackoverflow.com/a/16876993/327402) to enable the error output to display, and found the issue that I fixed.
Unfortunately, after this "workaround" (Why the hell have I to make such things to see my errors?), I found that there was an error message that I cannot fix:
error: duplicate class: com.mypackage.name.BuildConfig
error: duplicate class: com.mypackage.name.R
I also noticed that I am not the only one to have this issue (see the comment in the SO answer I linked above)
The first time, I was able to fix it by enabling "External build" again, but it happened again because I needed to see the error output; now everything is broken, and I cannot find what happened.
With Eclipse, the R file was easy to find, in the gen folder, but with Android Studio, there are too many files, and I am a little bit lost.
Any idea/suggestion?
Good question, I've spent one whole day on this and have given up for the time being.
See my answer, upgrading Android Studio to 0.1.5 fixed everything!
So, just to let you know...
A few minutes after I posted my question, Google released an update to Android Studio (0.1.5)
See link: https://plus.google.com/+AndroidDevelopers/posts/Y9vhvGaHCbh
Tor Norbye kindly answered my question in this community, and I am sharing here
So the workaround I quoted in the OP is no longer mandatory.
Enabling External build again after upgrading Android Studio let me see the real errors (a library and some Gradle imports that I fixed).
So, I consider the Android Studio upgrade the best answer to this question...
So we can now see errors within Android Studio even if Use External Build is checked?
I've found a question like this that has some replies here:
Cannot resolve R.java, duplicate class
You can try this:
Delete the Build folder generated by Android Studio automatically
Also you can try to Rebuild project by clicking Build->Rebuild project after deleting build folder.
Unfortunately, this and many other answers have nothing to do with Android Studio. There is no Clean, and there is no gen folder. And where can we find the Java Build Path thingy?
EDITED * (Misunderstood your comment). Removed Eclipse info from answer, sorry.
Ok, I also have the same problem, and this is what worked for me.
I first unticked external build in the compiler settings. Then when I compiled I got two errors related to R.java, duplicate class.
Then I deleted the build folder manually from Finder. Then I rebuilt from Android Studio, but still got the same error.
Then I went back to the compiler preferences and ticked the external build setting, and it worked fine after that.
Looks like some bug.
A variation of this answer worked for me: (1) disabled external build; (2) try to run the project and failed; (3) manually delete the build folder; (4) run the project again. After trying to run it I got the dialog to select either a phone or emulator.
Apparently, or at least for me, 0.1.5 has a bug and cannot run an external build
because of some path error you can read about here:
https://code.google.com/p/android/issues/detail?id=56628
So I switched to internal building, and then I hit the double R symbol bug.
After deleting the build path, the application compiles without errors, but the build folder is not fully rebuilt; I'm missing the R.java file, which the internal builder does not generate.
I've rebuilt it with the external builder on an un-updated version as a temporary workaround while this issue is fixed.
BTW, if anyone knows how to tell internal Gradle to rebuild the build folder, please share.
I just had the same problem and found a way to solve it (You can do this with Android Studio open):
Go to your Android Studio projects folder located in c:\users\[USERNAME]\AndroidStudioProjects
Locate your project folder and open it.
Delete the folder named build.
From here, enter your application name folder, where you can find another build folder along with the libs and src folders and a file named build.gradle.
RENAME the build folder to build2 or something else.
Now in Android Studio go to Build->Rebuild Project.
And after the project was rebuilt, open again the folder where the build2 folder that you renamed is located.
Delete the new folder named build that was created.
Rename your build2 folder to build again.
Done.
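The folder shuffle above can be sketched as a script (paths are illustrative; a temp directory stands in for the real project, and the "Build -> Rebuild Project" step is simulated by recreating the folder):

```python
import os
import shutil
import tempfile

# Stand-in for c:\users\[USERNAME]\AndroidStudioProjects\MyProject
proj = os.path.join(tempfile.mkdtemp(), "MyProject")
os.makedirs(os.path.join(proj, "build"))           # top-level build folder
os.makedirs(os.path.join(proj, "app", "build"))    # module build folder

shutil.rmtree(os.path.join(proj, "build"))         # delete the top-level build folder
os.rename(os.path.join(proj, "app", "build"),      # rename the module build folder aside
          os.path.join(proj, "app", "build2"))
os.makedirs(os.path.join(proj, "app", "build"))    # the rebuild recreates a build folder
shutil.rmtree(os.path.join(proj, "app", "build"))  # delete the newly created build folder
os.rename(os.path.join(proj, "app", "build2"),     # rename build2 back to build
          os.path.join(proj, "app", "build"))
```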
|
STACK_EXCHANGE
|
- Chollet’s Deep Learning Book- Why it is a Must-Have for AI Enthusiasts
- What makes Chollet’s Deep Learning book so special?
- An in-depth look at the content of Chollet’s Deep Learning book
- How Chollet’s Deep Learning book can help you become a better AI practitioner
- What other AI books are available that can complement Chollet’s Deep Learning book?
- How to make the most out of Chollet’s Deep Learning book
- An overview of the deep learning field for AI beginners
- Why deep learning is a critical component of AI
- How Chollet’s Deep Learning book can help you stay ahead of the AI curve
- Further reading and resources on deep learning and AI
If you’re into AI and deep learning, then you need to check out Chollet’s new book. It’s a must-have for anyone interested in the topic.
Chollet’s Deep Learning Book- Why it is a Must-Have for AI Enthusiasts
If you are an AI enthusiast, then you must have Chollet's Deep Learning book. Packed with eight chapters of material, this textbook provides a comprehensive understanding of the subject.
Some of the key topics that are covered in the book include: neural networks, artificial intelligence, Deep Learning architectures, training Deep Learning models, and more. In addition, the book also comes with three appendices that offer supplementary information on the topic.
With its vast coverage and simple language, Chollet’s Deep Learning is a great resource for anyone who wants to learn about this cutting-edge technology.
What makes Chollet’s Deep Learning book so special?
Chollet’s Deep Learning book is crammed full of practical information that you can immediately put to use in your own AI projects.
Divided into three sections – fundamental concepts, modern techniques, and applications – the book covers everything you need to know about deep learning. Chollet also provides extensive code examples using Keras, one of the most popular deep learning frameworks.
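To give a flavor of what those examples build toward, here is a tiny sketch, in plain numpy rather than Keras itself, of the computation a single dense layer performs:

```python
import numpy as np

# output = activation(inputs @ weights + bias) -- the operation a Keras
# Dense layer wraps (the weights here are random, purely illustrative)
rng = np.random.default_rng(0)
inputs = rng.standard_normal((4, 3))    # batch of 4 samples, 3 features each
weights = rng.standard_normal((3, 2))   # maps 3 features to 2 units
bias = np.zeros(2)

def relu(x):
    # Rectified linear unit: a common hidden-layer activation
    return np.maximum(x, 0.0)

outputs = relu(inputs @ weights + bias)  # shape (4, 2), all values >= 0
```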
If you are serious about pursuing a career in AI or want to take your existing skills to the next level, Chollet’s Deep Learning book is a must-have.
An in-depth look at the content of Chollet’s Deep Learning book
If you’re looking for a comprehensive, in-depth look at deep learning, then you need to check out Francois Chollet’s Deep Learning book. Chollet is the creator of the Keras deep learning library, and his book is widely considered to be one of the best on the subject.
In Deep Learning, Chollet covers everything from the basics of neural networks to advanced concepts such as convolutional and recurrent neural networks. He also includes printouts of Keras code throughout the book so that readers can follow along and see how deep learning concepts are implemented in practice.
If you’re serious about learning deep learning, then Chollet’s Deep Learning book is a must-have.
How Chollet’s Deep Learning book can help you become a better AI practitioner
Deep learning is a branch of machine learning built on neural networks that learn hierarchical representations from data. The main difference between deep learning and other machine learning techniques is the number of layers in the network. Deep learning networks can have dozens or even hundreds of layers, while shallow neural networks only have a few.
Chollet’s book, Deep Learning with Python, is a great resource for anyone who wants to learn more about this exciting field. The book covers all the basics of deep learning, including how to train and deploy deep learning models. It also includes an overview of some of the most popular deep learning frameworks, such as TensorFlow and Keras.
What other AI books are available that can complement Chollet’s Deep Learning book?
There are many different books available on the topic of AI, and it can be difficult to know which ones are the most helpful. Chollet’s Deep Learning book is a great resource for anyone interested in learning more about AI, but there are other books available that can complement it. Some of the other AI books that are available include:
-Hands-On Machine Learning with Scikit-Learn and TensorFlow by Aurélien Géron
-Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
-Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
-Machine Learning: A Probabilistic Perspective by Kevin Murphy
How to make the most out of Chollet’s Deep Learning book
Assuming you have some understanding of Deep Learning, I would recommend the following approach to make the most out of Chollet’s book:
1. First, read the whole book from start to finish. This will give you a broad understanding of the topics covered and how they all fit together.
2. Next, go back and re-read each chapter, this time focusing on the code examples. Run the code for yourself and experiment with changing various parameters to see how it affects the results.
3. Finally, once you feel comfortable with the concepts and code, try implementing some of the algorithms yourself from scratch. This will really help solidify your understanding.
An overview of the deep learning field for AI beginners
Deep learning is a field of machine learning that utilizes neural networks to learn from data. Neural networks are a type of artificial intelligence that are modeled after the brain. Deep learning allows machines to automatically improve their own performance by increasing their own ability to recognize patterns.
The deep learning field has seen rapid advancements in recent years, and it is now being applied to a variety of tasks such as image classification, object detection, and natural language processing. Deep learning is also becoming increasingly important for applications such as autonomous vehicles, where it is used to help machines understand the environment and make decisions.
Chollet’s Deep Learning book is a great resource for anyone who wants to learn more about this exciting field. The book provides an overview of the deep learning field and its history, as well as a detailed explanation of how neural networks work. The book also covers a variety of deep learning applications and provides code examples in Python.
Why deep learning is a critical component of AI
Deep learning is a critical component of AI, and Chollet’s book is the best resource for understanding its potential. The book covers the history and current state of deep learning, how it works, and its applications. Chollet also offers clear explanations of important concepts such as neural networks, convolutional neural networks, and recurrent neural networks.
How Chollet’s Deep Learning book can help you stay ahead of the AI curve
If you’re looking to get ahead of the curve in the AI field, Chollet’s Deep Learning book is a must-have. Packed with information on the latest advances in the field, it will help you stay current and on the cutting edge.
Further reading and resources on deep learning and AI
If you’re interested in delving further into deep learning and artificial intelligence (AI), then Chollet’s book is an excellent choice. It provides a detailed, technical overview of the subject matter, yet is also highly accessible and readable for those without a background in computer science or mathematics.
In addition to Chollet’s book, there are a number of other excellent resources on deep learning and AI. For practitioners, the Deep Learning Specialization on Coursera offers a comprehensive, hands-on introduction to the subject. For those looking for a more conceptual overview, consider watching Geoffrey Hinton’s Neural Networks Lecture Series (available on YouTube). Finally, for those interested in staying up-to-date with the latest research in deep learning, I would recommend subscribing to the Deep Learning Research Review newsletter.
|
OPCFW_CODE
|
Novel: Complete Martial Arts Attributes
Chapter 181: I Accidentally Break Through Too
“Zhuo Tai suppressed his level and broke through in the middle of the fight!”
“No, that’s just his after shadow!” someone shouted.
The ‘Wang Teng’ that got struck slowly faded. It was just a residual shadow.
Meanwhile, Zhuo Tai calmed down instead. He said softly, “It isn’t just about the battle of dignity between us. It’s a battle of martial arts paths. Your appearance didn’t simply make me lose my Third Section Room No. 1 dormitory. For an ordinary student like me with an ordinary background, the school’s resources are everything I have.”
In the end, this was the result…
Wang Teng waved his fists. His basic battle technique was merged into every attack. He started slamming his fists at Zhuo Tai without stopping.
Zhuo Tai held a battle sword in his hands. His aura was solid, and his gaze seemed to be shooting daggers. It was extremely sharp.
“Why isn’t Wang Teng taking out his weapon?” Someone frowned.
Zhuo Tai’s pupils constricted viciously. His speed increased as he retreated. At this moment, Wang Teng was giving him a very dangerous feeling.
The audience was baffled.
He just didn’t choose to do it.
“His footwork has exceeded the realms of a simple battle technique. What level is his footwork at? Mastery? Profound enlightenment?”
Zhuo Tai was furious. He had already lost control over his emotions and mentality. He didn’t hide his ability anymore. The Force suppressed in his Force nucleus exploded instantly.
“Could it be some footwork Force battle technique?”
Actually, this move was extremely simple. Ordinary martial warriors might be unable to react in time. However, Wang Teng had time to interrupt Zhuo Tai’s leveling-up process.
How was this possible?
The cheers stopped suddenly. The second-year students were like ducks whose necks had been strangled. Their mouths were open, yet they couldn’t make any sounds.
“What resources? I don’t care. However, if you want to step on me to move forward, then I’m sorry, I will make sure you fall into an abyss.”
Is this where you get your confidence? This is one of the possibilities I guessed.
On the other hand, Wang Teng didn’t hold any weapons. He stood there in a relaxed manner, his stance seemingly full of openings.
The first-year students’ faces turned pale. They were frightened by this powerful attack. Wang Teng still lost…
Wang Teng and Zhuo Tai stood facing each other.
The entire stand was quiet. Everybody was flabbergasted.
This voice came from Wang Teng.
“What a quick pace!”
|
OPCFW_CODE
|
Example with CollapsingMergeTree
Hi, thank you for the project. It seems really promising in the Kafka-ClickHouse space.
I am trying a variation of the employees example, where the employees table in ClickHouse is already created and has the CollapsingMergeTree engine, but I am getting errors on the sink side.
I am also using the 2022-12-28 tag of the docker image.
I am creating the table with this command:
CREATE TABLE employees.employees
(
`emp_no` Int32,
`birth_date` Date32,
`first_name` String,
`last_name` String,
`gender` String,
`hire_date` Date32,
`_sign` Int8,
`_version` UInt64
)
ENGINE = CollapsingMergeTree(_sign)
PRIMARY KEY emp_no
ORDER BY emp_no
SETTINGS index_granularity = 8192;
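For context, a CollapsingMergeTree merge cancels pairs of rows with the same sorting key whose sign values sum to zero. A simplified illustration of that rule (plain Python, not ClickHouse internals):

```python
def collapse(rows):
    # rows: (emp_no, sign) pairs; sign is +1 for a "state" row and -1 for a
    # row written to cancel a previously inserted state row for the same key
    net = {}
    for emp_no, sign in rows:
        net[emp_no] = net.get(emp_no, 0) + sign
    # After a merge, keys whose rows fully cancelled each other disappear
    return {k: v for k, v in net.items() if v != 0}

# emp_no 1 was inserted, deleted, and re-inserted; emp_no 2 just inserted
survivors = collapse([(1, +1), (1, -1), (1, +1), (2, +1)])
```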
and the errors I am seeing are like the following:
2023-01-02 11:41:32,374 INFO || *** QUERY***insert into employees(emp_no,birth_date,first_name,last_name,gender,hire_date,_sign,_version,_sign) select emp_no,birth_date,first_name,last_name,gender,hire_date,_sign,_version,_sign from input('emp_no Int32,birth_date Date32,first_name String,last_name String,gender String,hire_date Date32,_sign Int8,_version UInt64,_sign Int8') [com.altinity.clickhouse.sink.connector.db.DbWriter]
sink | 2023-01-02 11:41:32,382 INFO || *** QUERY***insert into employees(emp_no,birth_date,first_name,last_name,gender,hire_date,_sign,_version,_sign) select emp_no,birth_date,first_name,last_name,gender,hire_date,_sign,_version,_sign from input('emp_no Int32,birth_date Date32,first_name String,last_name String,gender String,hire_date Date32,_sign Int8,_version UInt64,_sign Int8') [com.altinity.clickhouse.sink.connector.db.DbWriter]
sink | 2023-01-02 11:41:32,385 ERROR || ******* ERROR inserting Batch ***************** [com.altinity.clickhouse.sink.connector.db.DbWriter]
sink | java.sql.SQLException: Missing value for parameter #7 [_sign Int8]
sink | at com.clickhouse.jdbc.SqlExceptionUtils.clientError(SqlExceptionUtils.java:73)
sink | at com.clickhouse.jdbc.internal.InputBasedPreparedStatement.addBatch(InputBasedPreparedStatement.java:320)
sink | at com.altinity.clickhouse.sink.connector.db.DbWriter.addToPreparedStatementBatch(DbWriter.java:458)
sink | at com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable.flushRecordsToClickHouse(ClickHouseBatchRunnable.java:228)
sink | at com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable.processRecordsByTopic(ClickHouseBatchRunnable.java:189)
sink | at com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable.run(ClickHouseBatchRunnable.java:105)
sink | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
sink | at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
sink | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
sink | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
sink | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
sink | at java.base/java.lang.Thread.run(Thread.java:829)
sink | 2023-01-02 11:41:32,388 ERROR || ******* ERROR inserting Batch ***************** [com.altinity.clickhouse.sink.connector.db.DbWriter]
sink | java.sql.SQLException: Missing value for parameter #7 [_sign Int8]
sink | at com.clickhouse.jdbc.SqlExceptionUtils.clientError(SqlExceptionUtils.java:73)
sink | at com.clickhouse.jdbc.internal.InputBasedPreparedStatement.addBatch(InputBasedPreparedStatement.java:320)
sink | at com.altinity.clickhouse.sink.connector.db.DbWriter.addToPreparedStatementBatch(DbWriter.java:458)
sink | at com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable.flushRecordsToClickHouse(ClickHouseBatchRunnable.java:228)
sink | at com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable.processRecordsByTopic(ClickHouseBatchRunnable.java:189)
sink | at com.altinity.clickhouse.sink.connector.executor.ClickHouseBatchRunnable.run(ClickHouseBatchRunnable.java:105)
sink | at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
sink | at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
sink | at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
sink | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
sink | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
sink | at java.base/java.lang.Thread.run(Thread.java:829)
It seems that two _sign columns are being added to the insert statement.
Could you let me know of potential configuration changes I need to make, and could you also update the mutable_data page with a CollapsingMergeTree example?
Thanks
I would recommend using the ReplacingMergeTree engine instead. The sink connector is fully tested with this engine, and I think we should explicitly de-support Collapsing engines. There is a lot of development in the ReplacingMergeTree space; see for example https://altinity.com/blog/replacingmergetree-and-crazy-clickhouse-stuff-the-december-sf-bay-area-clickhouse-meetup
Thanks @aadant. I wasn't aware of the new parameters to improve final performance, so we will give them a go. We were hoping to deduplicate our tables derived from Postgres tables using the CollapsingMergeTree engine instead, and that's why I asked about its integration with the sink connector.
@geopet85 did you figure it out? Can this be closed?
|
GITHUB_ARCHIVE
|
Compiled by | Lin Ming, Nuclear Cola
Netizen Review: First, let’s fire all of Baidu’s programmers.
- Robin Li: The job title “Programmer” will no longer exist in the future
In an interview on CCTV’s “Dialogue” · New Year’s Talk on March 9, Baidu founder, chairman, and CEO Robin Li stated, “Basically, there will no longer be a job called ‘programmer’ because everyone who can talk will have the ability of a programmer. The programming languages of the future will remain only two: English and Chinese, the two languages currently leading in artificial intelligence technology.”
Some netizens agreed with Robin Li’s point of view, saying it is a trend of the future, and the future development must be intelligent. “In the future, the difficulty of programming, the difficulty of writing code, will definitely continue to decrease.” “The appearance of big models has already lowered the threshold for programmers, and the threshold will definitely become lower and lower,” “The profession of programmer will evolve to a higher level.”
Others shared different opinions: “Low-end programmers will disappear, and creative programmers will flourish,” “Learning to program is still a basic requirement, and no amount of creativity can compensate if one can’t understand programs.” Some netizens even joked, “First, let’s fire all Baidu’s programmers.” A more pessimistic netizen commented, “Do programmers still need AI to phase them out? Once you turn 35, you can’t find a job.”
On March 10, Zhou Hongyi tweeted in refutation: “Will big models replace programmers? Isn’t programming necessary in the future? I believe that the enthusiasm for programming will not subside within the next ten years.”
In the interview, Robin Li also discussed the speed of artificial intelligence development and suggested, “I feel that the development of artificial intelligence is slower than I imagined. AI has been proposed for nearly 70 years now, and every ten years or so, everyone says that we’re finally about to achieve general artificial intelligence. But in reality, the situation is more complex than imagined.”
- Stability AI CEO: human programmers will completely disappear within the next five years
Just like Robin Li, other technology experts share similar views.
Fixie co-founder and CEO, and former head engineer of Google Chrome's mobile team, Matt Welsh, stated, "The programming job may no longer exist in three to five years, and even the discipline of programming may end." Welsh, who founded a start-up company to prove this theory, believes that due to technologies like ChatGPT and Copilot, programming is at a turning point from a human job to a robot job. In his opinion, programmers need to evolve into AI program "teachers" - or product managers, or code reviewers, roles that are less affected by robots.
Stability AI founder and CEO, Emad Mostaque, predicted in a technology podcast episode: “Within five years, human programmers will completely disappear.” Stability AI is the developer behind the world’s most popular open-source image generator, Stable Diffusion, and their projects run across a wide spectrum, like protein folding structure prediction, DNA analysis and chemical reaction modelling, language modelling and even audio-visual data processing.
Highlighting the grim future for human programmers, Mostaque points to ample evidence supporting his point of view. According to GitHub’s statistical data, “41% of all current code is generated by AI.” Mostaque further added that what’s more interesting is that “our project surpassed Bitcoin and Ethereum on GitHub in just three months, quickly setting off a new trend,” demonstrating AI’s stronger public acceptance compared to cryptocurrencies.
Looking ahead to the near future, Mostaque anticipates breakthrough changes in the way humans obtain information and communicate. He predicts that by the end of 2024, everyone will have installed ChatGPT on their phones, running it offline without depending on the internet. And with AI models residing entirely on mobile, “our conversational interaction experience will undergo a fundamental change.”
- Where will programmers head in the future?
Even though tech giants have differing perspectives on the future development of the programming profession, they all agree that the AI wave will bring about major changes.
Welsh predicts that when programmers start to be phased out, only two roles will persist: product managers and code reviewers.
According to Welsh, the role of a product manager won’t change dramatically. “Human product managers will still be able to write English descriptions telling what the software should do, i.e., the Product Requirement Document (PRD). It’s just that instead of handing over the PRD to the engineering team and waiting for about six weeks for them to deliver, you just need to give the PRD to the AI, and it will spit out code in a few seconds.”
Those proficient in programming will take up the role of reviewing and reading AI-generated code to ensure it operates as expected. For current programmers and those about to enter the field, they will now need to become teachers to AI, not programmers themselves. Welsh emphasized, “It’s about teaching AI to write code, not doing it yourself.”
Tencent tech lead Ru Bingcheng believes that engineers need to focus on business understanding, requirement breakdown, architectural design, and design trade-offs, and on that foundation learn to cooperate with AI through prompts, so that "engineer + LLM" yields an effect greater than the sum of its parts. To better coexist with AI in the future, programmers need to reinforce their abilities in the following three aspects:
- Understanding, analyzing, and breaking down requirements.
- Designing and analyzing architecture, making trade-offs in design, and promoting documentation and standardization of design
- Understanding the essence of the problem, rather than merely learning its application (better to teach a man to fish than to give him fish).
|
OPCFW_CODE
|
We’ve been designing and deploying micro-services with REST APIs for a while now, using API-First design. Time to document some of the lessons we’ve learnt during that process. They’re not presented in any particular order and they relate to various parts of the development lifecycle.
How to do paging
If a list query (say GET /members?start=0&count=100) could return a total of, say, 15,000 results, you need to indicate this in the response (so the client can show a paging control).
Our early APIs returned an "envelope" object, containing the list of 100 members plus the total count. This is OK, but it forces the client to implement an extra class for all list queries, whereas what they really expect is an array of members.
This can be achieved by putting the total count in a response header (X-Total-Count is often used) and returning a pure list of members. Note that even if you don't expect a list to grow long, it's a good idea to provide paging anyway, for consistency.
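As a sketch of the header-based approach (the `paginate` helper and the member shape here are illustrative, not from any particular framework):

```python
def paginate(items, start=0, count=100):
    """Return one page of items plus a headers dict carrying the total count."""
    page = items[start:start + count]
    headers = {"X-Total-Count": str(len(items))}
    return page, headers

members = [{"id": i} for i in range(15000)]
page, headers = paginate(members, start=0, count=100)
# the body stays a plain list of members; the total travels in X-Total-Count
```

The client still deserializes a plain array, and only clients that care about paging need to look at the header.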
How to handle long running tasks
Users of your API will expect immediate responses; you should not leave them waiting 20 seconds for an operation to complete (it's likely to time out on the client side). If the underlying operation might take longer than, say, one second, you should run it in the background, return a task id, and provide an additional method to get the status of the task via its id (and to cancel it, if possible).
How to handle versioning of your API
The simplest way to handle versioning is simply to introduce the version into the path of your API, e.g. GET /items, GET /v2/items. This is a common pattern and users will accept it. However, you shouldn't mix versions, so documentation and playground for older versions should only be available via a "previous versions" link.
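A sketch of how a router might peel the version off the path (the regex and the fall-back to v1 for unprefixed paths are assumptions, not a prescription):

```python
import re

def split_version(path):
    """Split '/v2/items' into (2, '/items'); unprefixed paths default to v1."""
    m = re.match(r"^/v(\d+)(/.*)$", path)
    if m:
        return int(m.group(1)), m.group(2)
    return 1, path
```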
Naming in REST APIs
Naming conventions are an important part of making your API easy to use. A number of de facto naming conventions have emerged which you can and should follow.
Objects should, wherever possible, have an id property which identifies them uniquely.
list vs objects
Use plural names for lists, singular for objects:
GET /items?start=20&count=10 will be expected to return a list of up to 10 items starting at item 20.
GET /item/2 will be expected to return an item with id=2.
How to handle authentication
Since REST APIs should be stateless, every API call should be authenticated separately.
REST APIs can use any authentication method supported by http(s). However, most APIs these days use an API key (a GUID like 7b515391-79e3-4857-92b1-9f7d0f099fcd). The most unobtrusive way to pass the API key to the server is to use a request header (called something like api-key). You can also use a query parameter (e.g. GET /items?api-key=7b515391-79e3-4857-92b1-9f7d0f099fcd). For ease of use (especially from a browser), you should support both styles.
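Supporting both styles can be as simple as checking the header first and falling back to the query string (a sketch; `headers` and `query_params` stand in for whatever dict-like objects your framework provides):

```python
def extract_api_key(headers, query_params):
    """Accept the key from an 'api-key' header or an 'api-key' query parameter."""
    return headers.get("api-key") or query_params.get("api-key")
```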
An API alone is not enough
These days, people expect a lot from APIs. As well as a great API, you'll also need an online portal with documentation, code samples in multiple languages, and a sandbox/playground where people can try out your APIs in a production-near manner.
Which tools to use
We use Swagger/OpenAPI to develop the API and then we wire it to a back-end (Java or Python) using OpenAPI java or python inflectors1. We use Redoc for online documentation of REST APIs. Postman is indispensable for ad hoc testing during development and curl is a useful simple command-line API client. We use HAProxy for load-balancing and Letsencrypt for SSL certs. We use Docker for deployment (not just for the server, but also for all infrastructure components like SSL termination, proxying etc).
1. You’ll often see people generating a REST API from a server implementation. At first glance, this seems easier than the API-First approach we use, but it’s the wrong approach because your API must be pristine and if you generate it from an implementation, you will inevitably bleed some implementation details into the API.
|
OPCFW_CODE
|
[sup-talk] viewing threads oldest-first
wmorgan-sup at masanjin.net
Fri Mar 14 14:30:58 EDT 2008
Reformatted excerpts from Matt Liggett's message of 2008-03-14:
> In every mailer I've used before my gmail days, I always viewed my
> email oldest first, especially in any inboxen. Is there a config
> option (I don't see a keyboard shortcut) to do this?
There's no option for this. I don't think it makes quite as much sense
in a world where mailboxes can be arbitrarily large (a Sup mailbox is a
search result across your entire set of email) and where loading is incremental.
But there's no technical reason why this couldn't happen, and other
people have asked for it before, so I'd be willing to add it.
> As I write this, I realize that oldest-first might mean different
> things in a threaded view. Is that the oldest first message in the
> thread, or the oldest last message?
Currently a thread's timestamp is the timestamp of the most recent
message in it, and I would be loathe to change that.
> Do I generally want inbox threads sorted by unread messages (that is,
> ordering by oldest unread message in each thread)?
That would be interesting, but unfortunately is not feasible in Sup. The
index currently knows nothing about threads, just individual messages,
so we can only sort by properties of messages, not by properties of
threads. The way we build a threadset now is we search for messages which
meet some search criteria, then we build a thread for each message.
> In the meantime, can anyone give me a pointer as to where in the code
> I might start digging regarding such a feature?
Normally I would want a hook, but since there are really only two
options that are technically feasible now, I would add a config variable
called "sort_oldest_first" or something like that. It will follow a
very similar codepath to thread_by_subj (so grep for that): it should be
initialized in sup.rb for new configurations, passed into a ThreadSet
constructor from ThreadIndexMode#initialize_threads, and passed into
Index#each_id_by_date, where it will change the query we send to Ferret.
Finally, ThreadIndexMode#update (which handles the UI component of
sorting) has got to respect this variable.
> I tried mutt+mairix for awhile, but it was kludgy and slow. sup looks
> like it might be the best of both worlds, enabling me to deal with a
> large volume of mail and still giving me the tagging and search that I
> love from gmail.
Yep, you're my target audience. :)
William <wmorgan-sup at masanjin.net>
More information about the sup-talk
|
OPCFW_CODE
|
[Question] Server side node_modules folder
I have used the generator-aspnetcore-spa template for Angular 2 to develop a website. Locally this works fine: F5 in VS debugs into the website. But when I deploy the website to Azure through continuous delivery I get the following exception:
Exception: Call to Node module failed with error: To use prerendering, you must install the 'aspnet-prerendering' NPM package.
This error also came up in #144, but that case works fine for me locally; my issue is a little different.
I am assuming the error is shown because node_modules is not copied into my package. I didn't include it in my source control (based on best-practice advice) and I deploy through VSTS continuous delivery. So I could include node_modules in my source control and be done with it, but I am trying to understand what the correct approach is here.
On the client side, webpack bundles the vendor libraries that are used into vendor.js and sends that to the client. Is a similar approach possible for Node on the server side? Or do I really need the full node_modules? If so, should I restore them during the build, with npm install? Or indeed add them to source control? Adding node_modules would make the package a lot bigger, but maybe there is no other solution? Maybe someone else has an idea about this?
I'm deploying to Azure as well and I don't include the node_modules folder. Make sure aspnet-prerendering is under "dependencies" and not "devDependencies". You can also ftp into the server, browse the file system and check exactly what was deployed.
I checked and aspnet-prerendering is under "dependencies", not "devDependencies". So that is not the problem. But your response made me curious so I decided to test my own theory and check-in the node_modules folder to see.
The error is now gone, but I run in another error:
ERROR in<EMAIL_ADDRESS>(13,24): error TS2307: Cannot find module './src/debug/debug_renderer'.
This error (and more debug.. specific errors) already occurs during dotnet publish, but locally a dotnet publish -c release works fine, so I have to figure out what goes wrong on the VSTS build agent. But I will look at that tomorrow.
I am not sure what this means, but it looks like this is another problem. The aspnet-prerendering problem was solved by adding node_modules, but now I am wondering why it is working in your website. Do you know where your server app loads the required JavaScript from?
I'm using react and ES6 instead of typescript so won't comment on the new error, but our projects will install in the same way. Look in your project.json and you will see something like mine below. When Azure publishes your project it will run these commands - these are what install the node packages, build the project using webpack, etc.
"scripts": {
"prepublish": [
"npm install",
"node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js",
"node node_modules/webpack/bin/webpack.js"
],
"postpublish": [ "dotnet publish-iis --publish-folder %publish:OutputPath% --framework %publish:FullTargetFramework%" ]
},
I suspect the difference is that @chris-herring might be deploying from Git (a.k.a. via Azure's "Kudu" deployment mechanism), which automatically does run those scripts during the deployment, whereas @eriksteinebach is deploying via VSTS, which is more like just directly FTPing your local folder to the server without running any deployment scripts on the server.
Personally I've mostly used Kudu deployment where everything just works cleanly without needing node_modules in source control. Also it's massively faster, because you don't have to ship literally tens of thousands of files from node_modules from your machine to the Azure server (the server will fetch its own copies of the NPM modules).
When I tried VSTS-based deployment, it did work fine (albeit very slow), but then I did allow it to include the node_modules. You certainly will have to do that, because otherwise the server will not have those files - VSTS deployment isn't going to run any deployment scripts on the server for you.
ERROR in<EMAIL_ADDRESS>(13,24): error TS2307: Cannot find module './src/debug/debug_renderer'.
I'm not sure why you're getting that. It didn't happen when I tried it, but then maybe I was using a different version of Angular 2 or TypeScript than you are doing. I can't investigate that today, but please let us know if you manage to track it down. If this becomes an ongoing problem for you please let me know and I'll try to schedule some time to see if I can investigate and repro that issue.
I can confirm I am using Kudu and deploying from gitlab. Was fairly easy to setup and works nicely. Thanks for the explanation on the VSTS/Kudu differences.
Thank you, that makes a lot of sense and explains it. I didn't realize there was a difference between those 2 ways. I am going to have a look into this Kudu deployment mechanism.
One question remains for me, those pre and post publish scripts are executed to populate the node_module folder? And those are run on the webserver? So npm is available on Azure App Service machines?
I also figured out the problem with the debug files. My .gitignore contained [Dd]ebug/ so ignored 2 folders in node_modules. I changed it to [Bb]in/[Dd]ebug/ because I figure that covers most and at the moment there are no other debug folders. An extra unrelated question: there is no way to unignore a folder after it has been ignored, even if the unignore is more specific right?
One question remains for me, those pre and post publish scripts are executed to populate the node_module folder? And those are run on the webserver? So npm is available on Azure App Service machines?
Yes, yes, and yes :)
I also figured out the problem with the debug files. My .gitignore contained [Dd]ebug/ so ignored 2 folders in node_modules. I changed it to [Bb]in/[Dd]ebug/
If you use a leading slash (e.g., /[Bb]in) then it will only match the top-level one, and you don't have to worry about clashes inside node_modules etc.
An extra unrelated question: there is no way to unignore a folder after it has been ignored, even if the unignore is more specific right?
Yes, there is, but I don't remember the syntax. Try searching for "gitignore exclude" or similar.
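For reference, the syntax in question is gitignore's `!` negation prefix: a later negated pattern re-includes a directory that an earlier pattern ignored (the paths below are illustrative, matching this thread's example). Note that a file cannot be re-included if one of its parent directories stays excluded.

```
[Dd]ebug/
!node_modules/**/[Dd]ebug/
```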
I'll close this because it sounds like you resolved everything.
|
GITHUB_ARCHIVE
|
Unable to edit review audits
I just had a review audit with very bad grammar. Everything, like "i" or sentence beginnings, was lowercased, so I decided to edit the question. When I clicked on the edit link below the question, I got redirected to a page saying that the post was deleted and can't be edited. Then I clicked on the link to get to the real question, and it was also deleted.
What should I do with those review audits? It's impossible to handle them like a normal question, where I would have edited it. Also, if I notice that I have a review audit instead of a normal review, why should I continue? Shouldn't review audits be questions or answers that still exist, so that you can edit them (the edit needn't affect the question) and so that you don't notice that you have a review audit in front of you?
I thought a review audit was a closed or deleted answer or question that is brought back up again to test you. So you wouldn't need to edit it; just decide on it like you would a normal question. (I don't have enough rep to know for sure how it works, but this is what I've picked up.)
@DonyorM Yes, on a normal question I would have edited it
Is a review audit actually just a set-up test that appears while you review?
@DonyorM A review audit is something that looks like a normal review but is a test of whether you take care while reviewing - so I should be able to handle it like a normal question
I read another post about this, and I'm pretty sure that review audits are old questions, either closed or deleted, that are randomly selected to be shown as a test. So you can't actually edit them, because they no longer exist. They're just a test. Then again, I could be wrong - I'm still too low on rep.
@DonyorM That's correct, but I should treat a review audit as a normal review - and I can't do that if I can't edit it
So did you fail or pass the audit? If you passed the audit, then there's no problem - you told the review system that the post needed work, but since the post had already been deleted, that would be pointless.
@SLBarth I skipped the review
So you're in the review queue, and you find a post that is so poorly phrased that you need to edit it first... but it turns out the post is already deleted.
So, apparently the post is an audit. You have shown that you were paying attention, so click "Recommend Deletion" and continue.
The system could let you edit the whole post first, but that would be a waste of your time - the post is already deleted.
Review audits are deliberately taken from the pool of deleted posts, because these are considered known examples of bad posts. (To be fair, the system doesn't always choose posts that are appropriate for reviews, but that's another discussion - check the disputed-review-audits tag if you want to know more).
On closing, choosing "Skip" like you did was also a good choice. When in doubt, skip.
Well, the only options I had were "No Action Needed", which would have led to failing the audit, or "Skip".
In the First Posts queue, you can upvote or downvote. Normally you'd want to edit a post into shape before passing judgment, but in this case you've already found that it was an audit, so downvoting is ok here.
|
STACK_EXCHANGE
|
Handy Rounding to the Nearest 1/2 Calculator gives the rounded to the nearest half value for the provided input in a fraction of seconds. All you have to do is just enter the decimal number in the mentioned input box and hit the calculate button to check the exact answer with steps in the output section.
Round off to the Nearest 1/2 Calculator: Do you require any assistance finding the rounding of a number to the closest ½? Then read this complete page. It provides useful information like what is meant by rounding decimal numbers to the nearest ½ and the detailed steps to round off to the nearest half, in the following sections. You can also find example questions with solutions on rounding numbers to the nearest fraction ½.
Rounding decimals to the nearest 1/2 means estimating the closest half value of the decimals. Estimating doesn't give the original value but it displays the nearest value for the original one. This process of rounding to the nearest ½ is helpful while performing math calculations between larger numbers.
Example: Rounding of 5.689 to the nearest 1/2 is 5½.
Go through the detailed steps to round off numbers to the nearest ½. Check these rules and follow these to get the result quickly.
Question 1: What is 56.896 rounded to the nearest 1/2?
Given number is 56.896
The fractional part is 0.896. It is closer to 1 than to 1/2, so round up to the next whole number.
Therefore, 56.896 rounded to the nearest ½ is 57.
Question 2: What is 1256.260 rounded to the nearest 1/2?
Given number is 1256.260
The fractional part of the number is 0.260
The fractional part is nearest to 0.5
0.5 represented as a fraction is ½
So, 1256.260 rounded to the nearest ½ is 1256½.
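The procedure in the examples above can be sketched in Python (rounding exact quarters up, which matches the convention stated in the FAQ below):

```python
import math

def round_to_nearest_half(x):
    # Double the number, round half up to the nearest integer, halve again.
    return math.floor(x * 2 + 0.5) / 2

round_to_nearest_half(56.896)    # 57.0
round_to_nearest_half(1256.260)  # 1256.5
round_to_nearest_half(32.67)     # 32.5
```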
If you wanted to know about more fractional or decimal rounding calculators, then check Roundingcalculator.guru that gives the instant results with explanation for better understanding of the concept.
1. How to round decimals to the nearest half?
To round off decimals to the nearest ½, you need to round the fractional part of the decimal to the nearest 1/2. Then write the rounded value.
2. Is 1/2 rounded up or down?
In general, 1/2 is rounded up. In some situations, 1/2 or 0.5 can be rounded down.
3. How to round off numbers to the nearest ½ on a calculator?
To round decimal numbers to the nearest half on a calculator, you have to follow these instructions. Give the decimal number as input and tap on the calculate button to avail the result within seconds.
4. What is 32.67 rounded to the nearest half?
Given number is 32.67
The fractional part is 0.67
This value is nearest to 0.5
And 0.5 represented as a fraction is ½
So, 32.67 rounded to nearest ½ is 32½.
|
OPCFW_CODE
|
#!/usr/bin/env python3
import os
import argparse
import numpy as np
import pandas as pd
from glob import glob
from ete3 import Tree
from plot_module import plot_correlation, plot_tree, to_float
def remove_units(s):
    return s.replace("(g)", "").replace("(days)", "").replace("(yrs)", "").replace("(kg)", "").replace("(cm)", "")
def plot_trees_from_traces(input_trace, output_plot, simu_dict, simu_tree):
axis_trees, axis_filenames = dict(), dict()
for filepath in input_trace:
for tree_path in sorted(glob("{0}.*.nhx".format(filepath))):
feature = remove_units(tree_path.replace(filepath + ".", "").replace(".nhx", ""))
filename = os.path.basename(filepath)
with open(tree_path, 'r') as tree_file:
tree_str = remove_units(tree_file.readline())
if tree_str.count("-nan") > 0:
continue
tree = Tree(tree_str, format=1)
if simu_tree:
for n_inf, n_simu in zip(tree.traverse(), simu_tree.traverse()):
assert (sorted(n_simu.get_leaf_names()) == sorted(n_inf.get_leaf_names()))
if feature not in axis_trees:
axis_trees[feature] = []
if feature not in axis_filenames:
axis_filenames[feature] = []
axis_filenames[feature].append(filename)
axis_trees[feature].append(tree)
if len([n for n in tree.traverse() if feature in n.features]) == len(list(tree.traverse())):
plot_tree(tree.copy(), feature, "{0}/{1}.{2}.pdf".format(output_plot, filename, feature))
for feature in axis_trees:
axis_dict, err_dict, std_dict = dict(), dict(), dict()
if feature in simu_dict:
axis_dict["Simulation"] = simu_dict[feature]
for filename, tree in zip(axis_filenames[feature], axis_trees[feature]):
values = np.array([to_float(getattr(n, feature)) for n in tree.traverse() if feature in n.features])
min_values = np.array(
[to_float(getattr(n, feature + "_min")) for n in tree.traverse() if feature + "_min" in n.features])
max_values = np.array(
[to_float(getattr(n, feature + "_max")) for n in tree.traverse() if feature + "_max" in n.features])
axis_dict[filename] = values
err_dict[filename] = np.vstack((np.abs(values - min_values), np.abs(max_values - values)))
std_dict[filename] = np.array(
[to_float(getattr(n, feature + "_std")) for n in tree.traverse() if feature + "_std" in n.features])
if len(axis_dict) > 1:
path = '{0}/correlation.{1}.pdf'.format(output_plot, feature)
plot_correlation(path, axis_dict, err_dict, std_dict=std_dict, global_min_max=False)
def open_simulation(input_simu):
simu_dict = dict()
simu_params = {k: v[0] for k, v in pd.read_csv(input_simu + '.parameters.tsv', sep='\t').items()}
t = Tree(input_simu + ".nhx", format=1)
root_pop_size = float(t.population_size)
simu_dict["LogPopulationSize"] = [np.log(float(n.population_size) / root_pop_size) for n in t.traverse()]
simu_dict["LogOmega"] = [np.log(float(n.population_size) / root_pop_size) for n in t.traverse()]
root_age = simu_params["tree_max_distance_to_root_in_year"]
simu_dict["ContrastPopulationSize"] = [
(np.log(float(n.population_size)) - np.log(float(n.up.population_size))) / np.sqrt(
n.get_distance(n.up) / root_age) for n in t.traverse() if not n.is_root()]
if "population_size" in simu_params:
simu_dict["LogMutationRatePerGeneration"] = [np.log(float(n.mutation_rate_per_generation) * root_age) for n in
t.traverse()]
simu_dict["LogGenerationTime"] = [np.log(float(n.generation_time)) for n in t.traverse()]
simu_dict["LogMutationRatePerTime"] = [
np.log(root_age * float(n.mutation_rate_per_generation) / float(n.generation_time)) for n in t.traverse()]
simu_dict["Log10Theta"] = [np.log10(4 * float(n.mutation_rate_per_generation) * float(n.population_size)) for n
in
t.traverse() if n.is_leaf()]
else:
simu_dict["LogMutationRatePerTime"] = [np.log(float(n.mutation_rate) * root_age) for n in t.traverse()]
simu_dict["BranchTime"] = [n.get_distance(n.up) / root_age for n in t.traverse() if not n.is_root()]
simu_dict["Log10BranchLength"] = [
np.log10(n.get_distance(n.up) * float(n.Branch_mutation_rate_per_generation) / float(n.Branch_generation_time))
for n in t.traverse() if not n.is_root()]
return simu_dict, t
def open_tsv_population_size(tree_file, tsv_file):
t = Tree(tree_file, format=1)
csv = pd.read_csv(tsv_file, header=None, sep='\t')
for index, (leaf_1, leaf_2, _, ne, _) in csv.iterrows():
if leaf_1 == leaf_2:
leaves = t.get_leaves_by_name(leaf_1)
assert (len(leaves) == 1)
n = leaves[0]
else:
n = t.get_common_ancestor([leaf_1, leaf_2])
n.pop_size = ne
pop_size_dict = dict()
root_pop_size = float(t.pop_size)
pop_size_dict["LogPopulationSize"] = [np.log(float(n.pop_size) / root_pop_size) for n in t.traverse()]
return pop_size_dict, t
if __name__ == '__main__':
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('-s', '--simulation', required=False, default="", type=str, dest="simulation")
parser.add_argument('-o', '--output', required=True, type=str, dest="output")
parser.add_argument('-t', '--trace', required=True, type=str, nargs='+', dest="trace")
parser.add_argument('--tree', required=False, default="", type=str, dest="tree")
parser.add_argument('--tsv', required=False, default="", type=str, dest="tsv")
args = parser.parse_args()
if args.simulation != "":
simu, simu_tree = open_simulation(args.simulation)
plot_trees_from_traces(args.trace, args.output, simu, simu_tree)
elif args.tree != "" and args.tsv != "":
pop_size, input_tree = open_tsv_population_size(args.tree, args.tsv)
plot_trees_from_traces(args.trace, args.output, pop_size, input_tree)
else:
plot_trees_from_traces(args.trace, args.output, {}, False)
|
STACK_EDU
|
Introduction and Background
Luis Ibáñez, Matt McCormick, Jean-Christophe Fillion-Robin, and Aashish Chaudhary attended the Scientific Computing with Python (SciPy) 2014 conference in Austin, Texas, from July 6th to July 12th. This year’s conference was again the largest ever, with registration reaching its cap at over 450 attendees, a 50% increase over last year. The main conference was extended from two to three days, and it was again preceded by two days of tutorials and followed by two days of sprints.
This year’s conference themes were a Geospatial Data in Science track and Scientific Computing in Education. There was also general scientific Python content in addition to the domain-specific mini-symposia.
Kitware was actively involved in the conference. Not only was Kitware a sponsor, but Luis, Matt, JC, and Aashish taught a tutorial on "Reproducible Science – Walking the Walk." In addition, Matt was a Birds of a Feather (BoF) Committee Chair, on the Program Committee, and Chair of the Vision, Visualization, and Imaging Mini-Symposium. JC presented a poster on CMake-ified CPython, and Aashish presented a talk on ClimatePipes and OpenClimateGIS, as well as a poster on UVis. Matt and Luis were co-authors on a SimpleITK talk. We also participated in BoF sessions, the software sustainability workshop, and sprints.
Luis, Matt, JC, and Aashish taught a tutorial on Reproducible Science – Walking the Walk. We co-taught the tutorial with Ana Nelson from dexy and Steve Smith from GitHub. The hands-on tutorial focused on training reproducible-research warriors in the practices and tools that make experimental verification possible with an end-to-end data analysis workflow. The tutorial also exposed attendees to open-science methods spanning data gathering, storage, and analysis, all the way to publication as a reproducible article.
In honor of one of the early practitioners of reproducible research, Antonie van Leeuwenhoek, we started by collecting images of tardigrades, transforming participants’ cellphones into microscopes with a drop of water. These lovable creatures, invisible to the naked eye, only reveal themselves under magnification.
The images collected were uploaded to an open access data sharing service, and an exercise on how to retrieve the images via a REST API was completed via IPython Notebook. Next, we discussed how to set up a reproducible computation environment and a Docker image to analyze the images. The images were processed with IPython Notebooks and SimpleITK, and best coding practices were addressed such as methods for code re-usability. Next, version control via Git and GitHub and unique identifiers were discussed. The versioned scripts were instrumented with regression tests, and participants ran the example repository test suite. The data and analysis results were integrated into a narrative via the literate programming tool, Dexy, and the Docker container. Finally, the entire work was submitted to an open science publication, the Insight Journal.
Tardigrade (above moss clump) uncovered by a cell phone camera microscope.
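The regression-testing step described above can be illustrated with a minimal, self-contained sketch. The image data, baseline value, and tolerance below are invented for illustration; the tutorial itself used SimpleITK on the collected tardigrade images.

```python
# Minimal regression-test pattern: compare a computed image statistic
# against a stored baseline instead of eyeballing the result.

def mean_intensity(image):
    """Mean pixel value of a 2-D image given as a list of rows."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

# Tiny synthetic "image" standing in for a downloaded tardigrade photo.
IMAGE = [
    [10, 12, 11],
    [13, 11, 10],
    [12, 10, 11],
]

# Baseline recorded when the analysis was first validated.
BASELINE = 11.11
TOLERANCE = 0.01

def test_mean_intensity_regression():
    # Fails loudly if a code change shifts the result beyond tolerance.
    assert abs(mean_intensity(IMAGE) - BASELINE) < TOLERANCE

test_mean_intensity_regression()
```

Instrumenting versioned analysis scripts with checks like this is what lets a test suite catch silent changes in results between revisions.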
We had over 60 enthusiastic participants, who worked in pairs. Following the tutorial, we had lunch with Fernando Perez and Brian Granger, of IPython fame, who gave us some great feedback. We published the tutorial repository on Zenodo to obtain a citable DOI for our work, which includes links to supplemental material including the website, videos, and data.
Full house of participants at the reproducible research tutorial.
We also attended excellent tutorials covering topics like the new interactive widgets in the IPython notebook and teaching techniques with the IPython notebook. All tutorials have videos online, and most have GitHub repositories where their material is hosted.
|
OPCFW_CODE
|
If you want to learn the R programming language, there are many ways you can go about it.
You can find a variety of resources online that can help you get started, including tutorials, videos, and online courses. In this post, we’ll explore some of the best ways to learn R online.
Table of Contents
Learning R – A Student’s Guide
Why learn R?
R is a programming language that is popular among statisticians and data scientists.
It is a powerful tool for data analysis and statistical computing. There are many reasons to learn R.
R is a free and open source software. This means that anyone can use and contribute to the development of R. R is also available on all major operating systems, so you can use it regardless of your platform of choice.
R is a highly versatile language. It can be used for data analysis, statistical computing, and machine learning.
R also has a large and active community of users, who have contributed a wealth of packages and tools to the language.
R is also a great language for learning statistical programming and data science. It is concise and easy to read, and there is a wealth of online resources available to help you learn it.
What is R?
R is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing.
The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
Polls, surveys of data miners, and studies of scholarly literature databases show that R’s popularity has increased substantially in recent years.
How can I learn R?
There are plenty of ways to learn R, and the best route depends on your level of experience and comfort with programming. If you’re just starting out, we recommend checking out some of the courses listed at the end of this article.
These will help you get started with the basics of the language, and will allow you to start writing your own programs in R. If you’re already familiar with another programming language, then learning R will be a breeze.
One of the best ways to learn R is by doing. There are plenty of online resources that will allow you to write and run your own R programs.
This is a great way to get a feel for the language, and to see how it works in action. Another great way to learn R is by working with real data.
- This can be done in two ways. First, you can find datasets online that you can use to practice your skills.
- Second, you can work with data that you already have, such as data from your own research.
This is a great way to learn about the language, and to see how it can be used to solve real-world problems.
Finally, we recommend taking some time to explore the community around R. There are plenty of forums, mailing lists, and IRC channels where you can ask questions and get help from other R users.
This is a great way to learn about the language, and to find out what other people are doing with it.
What are some example applications of R?
R is a programming language that is commonly used for statistical computing and data analysis. There are many different applications for R, some of which are listed below.
- R can be used for exploratory data analysis, which involves manipulating and visualizing data to gain insights and understanding about the dataset.
- R can be used for predictive modeling, which involves using statistical methods to build models that can predict future outcomes based on data.
- R can be used for statistical testing, which involves using statistical methods to test hypotheses about data.
- R can also be used for data visualization and maps, which involves creating visualizations to communicate data and insights.
Frequently Asked Questions
How can I learn R programming online?
There are a variety of ways that you can learn R programming online. You can find several free tutorials and lessons online, or you can choose to enroll in a more comprehensive course. Whichever route you choose, make sure that you find a reputable source of information so that you can learn the language correctly.
Can I learn R in 3 months?
You can learn the basics of R in three months, but to become proficient in R can take years.
How can I learn R language for free?
There are many ways to learn the R language for free. You can find tutorials and videos on YouTube, or join forums and discussion groups.
Can I teach myself R programming?
Yes, it is possible to teach oneself R programming. However, it may be difficult to find resources and support if one does not have any previous experience with coding.
Is R programming language easy to learn?
The answer to this question is both yes and no. R programming language can be easy to learn for some people and more difficult for others.
Is R or Python easier to learn?
Neither language is inherently easier or harder to learn; this varies depending on the person.
What is R used for in programming?
R is a programming language and software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing.
Which is better R or Python?
It depends on what you mean by “better.” If you are looking for a language that is easier to learn, Python might be a better choice. If you are looking for a language that is more popular, R might be a better choice.
Is R same as Python?
No, R is not the same as Python.
What type of language is R programming?
R is a high-level, interpreted, and general-purpose programming language.
Overall, learning R online can be a rewarding experience. With the right courses and motivation, anyone can learn this powerful programming language.
While there are some challenges, such as finding good quality materials and staying motivated, these can be overcome with perseverance.
The benefits of learning R, such as the ability to perform sophisticated data analysis and the potential for career advancement, make it worth the effort.
Did you take one of the R courses recommended in this article, or do you have another one to recommend? Please let us know in the comments below.
- R-programming course with Coursera – Johns Hopkins University
- Tutorials Point – R programming
- R Programming course at Pluralsight
- R Programming Fundamentals – Stanford
- R programming basics – guru99
- Learn the basics of R with freecodecamp’s youtube video
- R Tutorial – W3School
- R Programming lessons from the University of Arizona
- Gentleman, Robert. R programming for bioinformatics. Chapman and Hall/CRC, 2008.
- Matloff, Norman. The art of R programming: A tour of statistical software design. No Starch Press, 2011.
- Chambers, John M. Software for data analysis: programming with R. Vol. 2. New York: Springer, 2008.
- Kaya, Efdal, et al. “Spatial data analysis with R programming for environment.” Human and ecological risk assessment: An International Journal 25.6 (2019): 1521-1530.
|
OPCFW_CODE
|
Before you start delving into the practical aspects, it is essential to understand the concept of code coverage. If you are unfamiliar with coverage, we recommend referring to our first article on the subject.
The latest update, version 0.6.13 of Ape and version 0.6.9 of ape-vyper, introduces coverage support, initially for Vyper contracts only. However, since Ape is a plugin-based framework, this system makes it straightforward to add coverage support for other languages, such as Solidity and Cairo.
So without further ado, let's dive in:
In this article, we are going to employ a Token contract as an illustrative example to showcase the functionality of coverage. You can follow this link here for the repository.
Utilising coverage functionality within the Ape framework is straightforward. You can initiate coverage by including the `--coverage` option when running `ape test`. This will generate a comprehensive coverage report. It is worth noting that certain types of coverage will require a provider plugin that supports transaction tracing such as `ape-hardhat` or `ape-foundry`.
This is our token contract for this example:
Currently we have not set any tests for this contract so our coverage will show 0%.
Ape shows the name, statements (Stmts), misses (Miss), coverage (Cover), and functions (Funcs). This is modelled after coverage-py and pytest-cov style reports.
Name: Name of the source identifier
Statements (Stmts): Amount of statements present in each contract
Misses (Miss): Amount of statements not covered in tests
Coverage (Cover): Percentage of the contract that is covered by tests
Functions (Funcs): Percentage of functions/methods that are tested in each contract
A statement is either a group of values that have the same line numbers in a source file or it is a value with an extra tag to identify builtin logic that is injected by the compiler (such as the default or fallback method).
The coverage report is normally generated in the terminal but you can also generate the coverage reports as an external XML or HTML files by configuring them in the `ape-config.yaml` like:
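As a rough sketch of what such a configuration might look like (the key names here are an assumption and may differ between Ape versions, so check the plugin documentation):

```yaml
# Hypothetical sketch of report configuration in ape-config.yaml;
# the exact schema is an assumption and may vary by Ape version.
test:
  coverage:
    reports:
      terminal: true
      xml: true
      html: true
```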
You can also see a more verbose coverage report by setting the field `terminal` or `html` to include `verbose: True`. Verbose outputs are useful when you are trying to find the missing areas to cover. This will break down each contract to show each function and how well the functions are covered individually.
This will result in the following output for our contract `TestToken.vy`:
The `_builtin_` functions listed under your contract coverage functions are functionalities that the compiler uses internally and that are not in the source code, such as non-payable checks, safe-math checks, and the not-implemented contract default method check. Tests for these conditions are necessary to reach 100% coverage.
Like gas reporting, you can also exclude contracts and methods from tracking coverage using your `ape-config.yaml` file.
You may want to use exclusions for methods or contracts that you don’t plan on pushing to production but are still using for debug or testing purposes.
We will be excluding the `DEBUG_mint` function from testing.
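A sketch of what such an exclusion might look like in `ape-config.yaml` (this schema is an assumption modelled on Ape's gas-report exclusions and may differ by version):

```yaml
# Hypothetical sketch; key names are an assumption.
test:
  coverage:
    exclude:
      - contract_name: "TestToken"
        method_name: "DEBUG_mint"
```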
Now let’s try and add some coverage for our `TestToken.vy` contract.
Some methods will show zero statements. One example with a zero statement may be from an auto-generated getter method for a public variable: certain versions of Vyper do not contain source mappings for these methods. However, Ape will still check to see if this method has been called in your tests. To get 100% coverage, you must call these methods in your tests.
In this case, these will be methods like `name()`, `symbol()`, `decimals()`, `totalSupply()`, `balanceOf()`, and `allowance()`. A simple example of tests that you can make for these are:
By just adding this test, the coverage report for the TestToken contract will now show:
So now all we need to do is to add tests for the other functions: `approve()`, `transfer()` and `transferFrom()`
By incorporating these small tests, our test coverage will now reflect that we have 100% coverage for these functions.
However, it is crucial to recognise that achieving full function coverage does not guarantee thoroughness in testing your contract. Ask yourself: “Are these tests adequately comprehensive for my contract?” While the tests introduced here may offer substantial coverage, they may not encompass every possible scenario in which your contract may execute. Even with diligent testing and high code coverage, there is always room for enhancement. The journey towards code perfection is a continuous one.
Currently only statement coverage is supported; if branch coverage were added, the total would show less than 100%.
One more thing to note is that some methods may have a full method signature while others do not. Methods that have the full signature mean that their short name is shared with other methods. This happens in Vyper from auto-generated kwarg-based methods, or in Solidity when overloaded functions are added. Thus, the full selector is used to distinguish the methods in the coverage (and gas) reports.
Remember in our previous article that we said “Code coverage metrics, as a percentage score, are really telling you very little about whether your code is doing the right thing.” Writing an abundance of tests is not the sole objective. Instead, it is crucial to focus on crafting tests that align closely with your specific objectives, ensuring optimal suitability and effectiveness. The coverage tool serves as a means to gauge the extent to which your code has been thoroughly exercised with these examples, or guide you towards areas that need more attention.
We trust that this article has provided valuable guidance on harnessing Ape's new coverage feature to prepare your contracts for production deployment. It is important to remember that while coverage is an invaluable tool, it should be supplemented with other methodologies to ensure the overall safety and security of your contract.
Consider complementing code coverage with techniques such as fuzzing, red teaming, and exploring unconventional attack vectors. By adopting a holistic approach to testing and security analysis, you can fortify your contract against potential vulnerabilities and mitigate risks effectively.
Embrace the power of comprehensive testing strategies and exploit the full potential of code coverage as a part of your robust development and deployment pipeline.
Thank you for reading and enjoy our coverage tool!
|
OPCFW_CODE
|
I cannot view my hdd
A couple of weeks ago, I was playing solitaire on my laptop (a three-year-old Compaq with no previous problems) and my external hard drive was attached. I then received a message saying there was an unknown driver connected to my laptop, and my HDD disappeared from My Computer. I have tried to uninstall it and reconnect. The HDD is visible in Device Manager, but I cannot see it anywhere else, and Device Manager states that it is working properly. The HDD is only 14 months old and the system I'm using is Windows Vista.
Give us as much info as you can about your laptop, make, model #, HDD, and the same for your removable drive. Is this drive powered by the USB cable or separate power block? If it has the choice to use an ext power supply, use that.
First thing to do is run a thorough virus scan to make sure this is not virus or malware related.
Then with removable drive attached, go to disk management, and select the removable drive, and uninstall it thru the context menu.
Then reboot and with the removable drive attached, let Windows 'find new hardware' and install the proper driver. If it's a driver problem that may correct it, if its a corrupted partition on the removable drive, it won't help but won't hurt.
My laptop is a compaq presario cq60 and the hard drive is a trekstor datastation maxi g.u. I have tried everything you have suggested and when the drive was reinstalled it states it is ready to use but is still not showing up in My Computer. In Disk Management it says the driver is not initialised but when I try to do this I get an error message saying it is an incorrect function; however in Properties it states the driver is working properly.
emsy_foley said:My laptop is a compaq presario cq60 and the hard drive is a trekstor datastation maxi g.u. I have tried everything you have suggested and when the drive was reinstalled it states it is ready to use but is still not showing up in My Computer. In Disk Management it says the driver is not initialised but when I try to do this I get an error message saying it is an incorrect function; however in Properties it states the driver is working properly.
The fact that it is seen in Device Manager means the OS recognizes the drive and the drive itself is probably OK.
The not initialized means there is no disk signature on the removable drive, which is incorporated in the Partition information in the MBR.
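For background, the disk signature referred to here is a 4-byte value stored in the Master Boot Record at byte offset 440 (0x1B8), just ahead of the partition table; an all-zero signature is what "not initialized" reflects. A minimal Python sketch, run against a synthetic 512-byte sector rather than a real disk:

```python
import struct

def read_disk_signature(mbr: bytes) -> int:
    """Extract the 4-byte disk signature from a 512-byte MBR sector."""
    if len(mbr) != 512:
        raise ValueError("an MBR sector is exactly 512 bytes")
    # The signature lives at offset 0x1B8 (440), little-endian unsigned int.
    return struct.unpack_from("<I", mbr, 0x1B8)[0]

# Build a synthetic MBR: a zeroed sector, a made-up signature, and the
# mandatory 0x55AA boot marker in the last two bytes.
sector = bytearray(512)
struct.pack_into("<I", sector, 0x1B8, 0xDEADBEEF)
sector[510:512] = b"\x55\xAA"

print(hex(read_disk_signature(bytes(sector))))  # → 0xdeadbeef
```

A freshly zeroed (uninitialized) sector would yield a signature of 0, which is why Disk Management prompts to initialize such a disk.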
Do you have important data on this drive or not? Could you repartition and reformat the drive without much of a loss?
The next step is to use a Partition management program, to attempt to reestablish the Drive signature, or reestablish the Disk Partition.
Download and run the Easeus Partition Master free edition, to see if it can reestablish the Partition Information that may be corrupted. Here is the link.
Partition Manager, Disk & Partition Copy Wizard, Partition Recovery Wizard.
emsy_foley said:I use my drive as my main storage unit so all my photos, music and videos are stored on it. I would not be able to replace a lot of these documents. If I do as you suggest, will it wipe my drive?
You should try the Easeus Partition Master to see if it can reestablish the partition information so you can access all your important files. It will not do anything destructive to the data on the drive.
If that is not successful, then you will need to use a data recovery program that finds recognized file types on this drive and moves them to a clean, properly functioning HDD. To do that, you will need a separate clean, partitioned, and formatted HDD at least the size of this drive so that your data can be copied over to it.
Try the Partition Master first.
emsy_foley said:I am trying to run Partition Manager on my external hard drive but I cannot view it on the disk list. What should I do?
If you can't see the partition on the removable hard drive, then you should use the Easeus Data Recovery Wizard. I am assuming it's still true you can see this drive in the Device Manager.
Here is the link
It can in many cases scan the drive, and repair the corrupted partition or redo the disk signature. Hopefully that will work.
If not, then you will need to use the "Data Recovery" part of that program to move your data to a functional HDD, as mentioned before.
The Data Recovery Wizard will recover 1 GB of data free; beyond that, you are required to purchase the commercial version for $69.95.
Another option would be the Acronis Disk Director which is also commercial and very well regarded.
emsy_foley said:I already tried the data recovery and even though it recognises the drive, it is stating there are no files to select
These findings are not encouraging. It may well be that your files are not going to be recoverable.
Here is another fairly powerful data recovery application that is free and worth trying.
"Test Disk" by CGSecurity "Christophe Grenier". It was primarily designed to help recover lost partitions and/or make non-booting disks bootable again.
Here is the link to this program http://www.cgsecurity.org/wiki/TestDisk
|
OPCFW_CODE
|
I'm a -year-old full stack developer with a growing passion for fintech, currently pursuing my Master's degree in Computer Science (Data Science minor) at the University of North Carolina at Charlotte. I understood the importance of personal finance and investments at a very early stage in my life, and I urge my peers to gain a greater sense of financial well-being in this fast-paced world. Being a very inquisitive person, I play around with any new tech that catches my interest, in an effort to create something useful out of it. My most popular projects, CryptoBOT and BlockScan, are a use-case-driven result of my passion for software development and finance.
Performed quality assurance tasks to check whether the software meets the specifications listed in the acceptance criteria.
Performance tested the database to determine the responsiveness of the application under extreme conditions.
Optimized the debugging workflow by adopting the use of Power Bi to analyze services’ run statistics in detailed and simplified reports for the dashboard.
Aggregated various metrics from the ETL processes and built reports for further analysis.
Engineered the front-end UI flow for the entire PWA using React and TailwindCSS.
Developed a seamless user experience using complex and secure state management and a rich dashboard to navigate the app.
Integrated the back-end services built on Flask and SQLAlchemy with a redis caching layer for efficiency.
Managed both back-end and front-end aspects of the development process.
Python is an interpreted, high-level, general-purpose programming language.
Hypertext Markup Language is the standard markup language for documents designed to be displayed in a web browser.
Cascading Style Sheets is a style sheet language used for describing the presentation of a document written in a markup language like HTML.
Express.js, or simply Express, is a web application framework for Node.js, released as free and open-source software under the MIT License. It is designed for building web applications and APIs.
Next.js is an open-source React front-end development web framework created by Vercel that enables functionality such as server-side rendering and generating static websites for React-based web applications.
Representational state transfer is a software architectural style that defines a set of constraints to be used for creating Web services. Web services that conform to the REST architectural style, called RESTful Web services, provide interoperability between computer systems on the Internet.
The Spring Framework is an application framework and inversion of control container for the Java platform. The framework's core features can be used by any Java application, but there are extensions for building web applications on top of the Java EE platform.
GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data.
SQLite is an open-source relational database management system for Structured Query Language.
MongoDB is a cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. MongoDB is developed by MongoDB Inc.
Peewee is a simple and small ORM used in Python.
Sequelize is a promise-based Node.js ORM for Postgres, MySQL, MariaDB, SQLite, and Microsoft SQL Server. It features solid transaction support, relations, eager and lazy loading, read replication, and more.
Hibernate ORM is an object–relational mapping tool for the Java programming language. It provides a framework for mapping an object-oriented domain model to a relational database.
Mongoose is a MongoDB object modeling library for Node.js that provides schema-based modeling for application data.
pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series.
SciPy is a free and open-source Python library used for scientific computing and technical computing.
NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.
A simple but elaborate visualization library.
Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
Altair is a declarative statistical visualization library for Python, based on Vega and Vega-Lite.
A Bot which helps you monitor any crypto market live. It notifies you instantly about the market movements so that you don't miss any opportunity for a LONG/SHORT
A bot that scans for huge transactions across 6 Blockchain networks. Monitored live and updated instantly on twitter. Warns you about potential sell-offs or Whale Buys. [Notion related to Crypto Markets]
A website which lets you browse through the largest GIF database ever! Built with React.
A panel to manage and update the college monthly journal, with a sleek dashboard and administrative panel. (This is a prototype of the original version only)
|
OPCFW_CODE
|
Some people also call it the universal plot, while others simply call it basic some kind of trouble and does his best to get how to write a plot. Ever wondered how the bet follows the standard plot of most stories come on in and read all about it skip to navigation skip to content at the finish line. Basic plotting with python and matplotlib 1 line plots the basic syntax for creating line plots is pltplot(x,y), where x and y are arrays of the same length that. Get an answer for 'list and define the five parts of a plot' and find homework help for other guide to literary terms questions at enotes. I'm trying to do a scatter plot with a line of best fit in matlab, i can get a scatter plot using either scatter(x1,x2) or scatterplot(x1,x2) but the basic fitting. Full online text of the bet by anton chekhov novels with a complicated love plot, sensational and fantastic stories, and so on.
How to write a good plot the part where the basic plot becomes the good plot and personifications you'll get the best out of your plot 16. Read the best of me (2014) synopsis, storyline and movie plot summary on fandango. Teach plot of a story arc with plot diagram, narrative arc, & plot chart with storyboards you can alter the plot diagram to a more basic beginning-middle. The following is an introduction for producing simple graphs with the r programming language label axes with smaller font and use larger # line widths plot. Ggplot2 line plot : quick start guide - r software and data quick start guide - r software and data visualization # basic line plot with points. Line graphs and scatter plots the graphing tutorial gives specific instructions on creating scatter plots and regression lines line graphs can be.
Plot basic form of the story pick one of the 36 plots to use for your fleon is an ambitious general who desires to rule the city and establish a royal line. Code for line of best fit of a scatter plot in python how to overplot a line on a scatter plot in python 2 on fitting a curved line to a dataset in python.
36 plot ideas for your novel gives thirty-six basic plots for all stories you could substitute best friends in some of the kinsmen examples. A page for describing main: seven basic plots note: this page was cut for reason: main redirect to work [malady] creating red links in 0 articles. Graph a line - powered by this page will help you draw the graph of a line it assumes the basic equation of a line (1-4) that best matches the equation you.
R line graphs - learn r programming language with simple and easy examples starting from r installation, language basics, syntax, literals, data types, variables. The best way to do so one thought on “ crafting an effective plot for children’s books ” it is basic that we read blog entry painstakingly.
Hard to believe as it may be, we're down to the final two basic plots, but they're also the two most well known. These two basic plot types make up the two halves of. Scatter and line plots in r how to create line and scatter plots in r examples of basic and advanced scatter plots, time series line plots, colored charts, and.
Draw a line of best fit by hand using a scatterplot then, calculate the equation of the line of best fit and extrapolate an additional point based upon. Ben-hur (1959) on imdb: plot summary, synopsis, and more imdb the contestants line up and await the signal from the governor best picture winners that are. I always advocate learning how to plot and plan because inevitably someone on the business side 25 ways to plot, plan and prep your story the basic and. The moviesite - basic plot structures many movies follow a formulaic plot structure, sometime combining two or more ideas into a single film. Most bible students readily agree with this basic plot because the plot line and continue reading in chapter two of bob enyart's best-selling book, the plot. Name _____hour_____ date_____ scatter plots and lines of best fit worksheet 1 draw a line of best fit draw a scatter plot of the data and draw in the line.
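A line of best fit like the one described above can be computed with a least-squares fit (a small sketch; the sample data are our own):

```python
# Fit a straight line (degree-1 polynomial) through scattered points.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# polyfit returns the least-squares slope and intercept of the best-fit line
slope, intercept = np.polyfit(x, y, 1)
```

Plotting the fit over the data is then a matter of drawing `slope * x + intercept` on top of a scatter of the points.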
How to make a line plot line plots provide a quick and easy way to organize data and are best used when comparing fewer than wikihow's mission is to. Graphing with excel create an initial scatter plot creating a linear regression line this module will start with the scatter plot created in the basic. The 5 most common action movie plots here are the five basic action movie plots as we understand them approximately one week after the railway line was. Christopher booker's the seven basic plots is a long book it's on the order of war and peace for thickness it also gets a bit repetitive at times, but if.
Thought I'd write an on-topic blog post, for a change! Sadly, we had our first Fedora 19 delay last week. I actually was trying pretty hard to get this to be the first release where we didn't slip at all, but alas, this was not to be. The good news is this is nothing like the Fedora 18 situation. In F18, when we hit the first go/no-go for Alpha we were barely within shouting distance of having anything releasable, and Beta was worse: we didn't just have a few stray bugs, we were still writing bits of the installer. What's holding up Fedora 19 Alpha, in contrast, is two bugs in UEFI installation, and that's it. (Note for the haters: none of the bugs has anything to do with Secure Boot). The installer is in fine shape, except for an issue in the custom partitioning screen which we'll try and slip a fix in for. All the code that was meant to be written by now is actually written, it's all working pretty well, and most of the functionality of the installer is pretty solid. There have been a ton of UI improvements since Fedora 18 based on both online feedback and real-world usability testing and observation, as well. So it sucks that we had to slip, but it's a much different situation from Fedora 18, and it's been a lot lower stress - we're not running around trying to keep tabs on 15 bugs and 5 features that aren't written yet, right now we're really just waiting on upstream review of a patch for the last UEFI issue (the fix will need to go into the upstream kernel, and we won't put it in Fedora until it's in an approvable state for upstream merging). We made some fairly significant changes to the release validation process prior to the Fedora 19 cycle, and we've been happy with how they're working out so far. At FUDCon Lawrence, the QA members present came up with a plan to revise the "nice-to-have" process by which we track fixes that we want to take through the Alpha, Beta and Final freezes. 
That proposal was sent to the mailing list, where in turn, a lot of the group members who weren't at FUDCon contributed improvements to it. We rolled out the changes to what is now called the freeze exception process - because, you know, it's for freeze exceptions! - ahead of the F19 Alpha validation process, and it's been working out well so far. It really boils down to renaming "nice-to-have" to "freeze exception", but there are some devils in the details, and it makes what was a somewhat poorly-understood process much more understandable. I drafted up some substantial revisions to the layout and content of the release criteria just ahead of the Alpha release. The changes have been put into practice for the Alpha release criteria (compare with the F18 Alpha criteria to see the changes), and will go into place for the Beta and Final criteria before we reach those milestones. The idea is to have a shorter 'main' text for each criterion with details and 'commentary' as expandable extra paragraphs, and to lay out the criteria better so it's easier to read the page and to refer to specific criteria. So far the changes seem to have been integrated into the process smoothly enough. Finally, we made some tweaks to the blocker bug process and blocker review meeting process to try and expedite blocker (and freeze exception) review a bit. We introduced the concept of automatic blockers to try and cut down a bit on unnecessary review of issues that are very obviously blockers, and tried to set things up so blocker review meetings don't run on forever. Again, those changes seem to be working out well. So I'm feeling good about the Fedora 19 cycle so far! I'm hopeful there'll be very few (or no) further delays after this first one, and we'll wind up with a very solid release, with significant improvements to the new installer.
/* Garbage Collection
*
 * Objects can be garbage collected once it's guaranteed that no thread
* has a reference to the object.
*
 * GC objects transition through a lattice of states each time they are
* observed by a CPU as having been freed. When all CPUs have observed
* that the object is freed (top), delete is invoked on the object.
*
* When gc_free() is called, the object is placed on a pending queue,
 * private to the invoking CPU, in the order in which it was freed.
 * We can infer that a CPU has observed an object if that CPU has
 * observed an object that was freed later than it.
*/
#include "gc.h"
#include <cstddef>
#include <cassert>
#include <mutex>
#include <condition_variable>
#include <atomic>
class gc_flush_control
{
private:
class waiter
{
private:
std::unique_lock<std::mutex> lock;
cpu_mask_t seen;
int id;
std::condition_variable ready;
waiter *next;
public:
waiter(std::mutex &m)
: lock(m), seen(0), id(cpu_id()), next(nullptr) { }
waiter *get_next() const { return next; }
void append(waiter *w) {
if (next != nullptr)
next->append(w);
else
next = w;
}
void wait() {
while (!cpu_seen_all(seen))
ready.wait(lock);
}
void checkpoint() {
seen |= 1ULL << cpu_id();
if (cpu_seen_all(seen))
ready.notify_all();
if (next != nullptr)
next->checkpoint();
}
};
std::atomic<waiter *> flushes; // Pending flushes, oldest to youngest
std::mutex mutex;
void checkpoint_locked();
public:
gc_flush_control() : flushes(nullptr) { }
void checkpoint();
void force_checkpoint();
void flush();
};
void gc_flush_control::flush()
{
gc_checkpoint();
waiter w(mutex);
waiter *head = flushes;
if (head == nullptr) {
flushes = &w;
} else {
head->append(&w);
}
checkpoint_locked();
w.wait();
// XXX - flushes might not be &w!
flushes = w.get_next();
}
void gc_flush_control::checkpoint_locked()
{
waiter *w = flushes;
if (w != nullptr)
w->checkpoint();
}
void gc_flush_control::force_checkpoint()
{
std::unique_lock<std::mutex> lock(mutex);
checkpoint_locked();
}
void gc_flush_control::checkpoint()
{
if (flushes.load(std::memory_order_relaxed) == nullptr)
return;
force_checkpoint();
}
class gc_cpu
{
private:
std::atomic<gc_object *> pending;
gc_object *last_observed[MAX_CPUS]; // XXX
friend gc_object;
gc_object * pop_ready();
public:
gc_cpu() : pending(nullptr), last_observed {} { }
void service();
gc_object *observe(int cpu, gc_object *unless);
void checkpoint(int cpu);
}; // XXX - cache aligned
static gc_cpu cpus[MAX_CPUS];
static gc_flush_control flushes;
gc_object *
gc_cpu::pop_ready()
{
cpu_mask_t seen = 0;
std::atomic<gc_object*> *at = &pending;
while (true) {
gc_object *t = *at;
if (t == nullptr)
return nullptr;
seen |= t->seen;
if (cpu_seen_all(seen)) {
*at = nullptr;
return t;
}
at = &t->next;
}
}
void
gc_cpu::service()
{
gc_object *ready = pop_ready();
if (ready == nullptr)
return;
while (ready) {
gc_object *next = ready->next;
assert(ready->dispatched == false);
ready->dispatched = true;
delete ready;
ready = next;
}
}
void
gc_object::gc_free()
{
assert(scheduled == false);
scheduled = true;
gc_cpu &mycpu = cpus[cpu_id()];
gc_object *sub = mycpu.pending;
set_next(sub);
mycpu.pending = this;
}
void
gc_object::observe(int cpu)
{
seen |= (1 << cpu);
}
// XXX - This is broken. unless might have been recycled, in which
// case we would want to observe it.
gc_object *
gc_cpu::observe(int cpu, gc_object *unless)
{
gc_object *head = pending;
if (head != nullptr && head != unless)
head->observe(cpu);
return head;
}
// XXX - gc_cpu has to be told its id?
void
gc_cpu::checkpoint(int cpu)
{
// XXX -why not cpu_count()?
for (int i = 0; i < MAX_CPUS; ++i)
last_observed[i] = cpus[i].observe(cpu, last_observed[i]);
service();
}
void
gc_checkpoint()
{
cpus[cpu_id()].checkpoint(cpu_id());
flushes.checkpoint();
}
void
gc_flush()
{
flushes.flush();
}
void
gc_finish()
{
for (int i = 0; i < cpu_count(); ++i)
cpus[i].checkpoint(i);
for (int i = 0; i < cpu_count(); ++i)
cpus[i].checkpoint(i);
}
void
gc_exit()
{
gc_checkpoint();
cpu_exit();
gc_checkpoint();
flushes.force_checkpoint();
}
static thread_local int gc_thread_locked = 0;
void gc_lock()
{
gc_thread_locked++;
}
void gc_unlock()
{
if (--gc_thread_locked == 0) {
gc_checkpoint();
}
}
As a partner in your Digital Revolution, CSS - “CLOUD SOFTWARE SOLUTIONS” - will be with you through every phase of the journey, from initial development to delivery and beyond. Our well-organized team of Microsoft-certified software developers produces custom software products, apps and operational systems for SMEs, enterprises, non-profits, government bodies and funded start-ups.
With CLOUD SOFTWARE SOLUTIONS, software development runs from idea to delivery in an effective, progressive way.
We share the skills of CSS - CLOUD SOFTWARE SOLUTIONS - with a passion for producing solutions that empower your business and help it grow.
If you are looking for an honest, reputable company to build the operational software that takes your present systems to the next level, you have found the right company to help. CSS - “CLOUD SOFTWARE SOLUTIONS” - has delivered systems for organizations large and small across a variety of sectors, with a focus on Digital Revolution and Responsive Teams.
CLOUD SOFTWARE SOLUTIONS is one of the most reliable software development companies.
Gain a competitive edge through our software development services. Our satisfied clients have seen improved results, streamlined processes and increased business agility with our service.
CSS - “CLOUD SOFTWARE SOLUTIONS” - has a capable team that delivers software solutions faster and better.
Grow your business by concentrating on what you do best, and leave the rest to us. Let us be the team that helps you scale your operations with ease.
CLOUD SOFTWARE SOLUTIONS has custom software development professionals with the expertise and skill to deliver outstanding apps, products, cloud software solutions and other services in the software development sphere. As an experienced software development company, CSS - “CLOUD SOFTWARE SOLUTIONS” - leverages the modern technology stacks and practices fundamental to the trade to build and deliver software products that help you shine in the digital landscape.
Web Application Development
CSS - “CLOUD SOFTWARE SOLUTIONS” - has responsive professionals who can build custom applications, platforms and products to meet the needs of your business or new venture.
In today's technology era, working hours are no longer 9-to-5. People expect to be able to work and access a company's information whenever they need it, from wherever they are, and a web application offers companies a flexible, economical way to meet this need. From a streamlined version of an internal system, to improving the effectiveness of remote and travelling employees, to a collaborative tool that builds brand loyalty with customers, a web application can deliver. Other web application examples include:
CSS - “CLOUD SOFTWARE SOLUTIONS” - learns everything there is to know about your industry, what a new app needs to provide and for whom. If you would like us to, CLOUD SOFTWARE SOLUTIONS will also suggest a few concepts of our own - we love coming up with fresh solutions that really make the technology work hard.
Responsive Web Applications
All the web applications CSS - “CLOUD SOFTWARE SOLUTIONS” - develops are responsive. This means they look great and work flawlessly on screens of any size: desktop, tablet or smartphone. We do this by adapting menus, controls and other graphics to the device based on the current screen width and height, ensuring the user always gets the best possible experience regardless of the device they are using.
Web Application Technology and Development Process
CSS - “CLOUD SOFTWARE SOLUTIONS” - knows that trying to imagine a complete web app is a big ask, so early in the project we produce a prototype that we refine throughout development. This allows our clients, and other stakeholders, to ‘play’ with the app along the way and flag any design or functionality that is not quite right.
Python Web Development
As an established Python web application development company, CSS - “CLOUD SOFTWARE SOLUTIONS” - has been serving clients from all over the world. Python is one of the most popular and powerful programming languages, and it can make your website and apps highly interactive.
The dynamic and flexible nature of the Python programming language makes it a first choice for your website. It is an interpreted, general-purpose, high-level programming language built on OOP. Python emphasizes a coding process that avoids repetition and supports Rapid Application Development.
WHAT CSS-“CLOUD SOFTWARE SOLUTIONS” OFFER
We offer a spectrum of Python development services, making web apps and sites scalable.
It is essential to keep your technologies modernized from time to time. To that end, you can hire Python developers from our platform to migrate from older versions to modern ones. We also offer migration services from other programming languages, such as Java and PHP, to Python.
As a well-known company, CLOUD SOFTWARE SOLUTIONS specializes in the language and develops robust websites and apps. Our Python web development services are focused on turning your concepts into reality. Count on us for a hassle-free experience in growing your business website and app.
CSS - “CLOUD SOFTWARE SOLUTIONS” - an Offshore Software Development Company You Can Count On!
CSS - “CLOUD SOFTWARE SOLUTIONS” - helps enterprises accelerate time-to-market and address key business challenges by providing scalable and extensible software solutions.
Today's vibrant, ever-evolving operating environment requires businesses to be agile and proactive. Business challenges can come from any direction: disruptive industry trends, competitive pressure, changes in regulations or evolving customer expectations. To tackle them successfully, companies need custom software solutions that are carefully planned and developed in harmony with their unique business processes.
At CLOUD SOFTWARE SOLUTIONS, we provide small and large companies with economical software solutions that help them accelerate time-to-market, reduce operating costs and drive maximum value for their customers. We are one of the leading offshore software development companies in India, helping organizations improve their web presence and restructure business processes. Adopting a consultative approach, our team of professionals first listens to your business ideas and then turns them into real-world software applications.
Custom Software Solutions to Crack Complex Business Problems!
In today's fast-changing business environment, it becomes all the more important for companies to employ custom software and web applications in order to improve customer engagement, streamline business processes and realize greater efficiency. At CSS - “CLOUD SOFTWARE SOLUTIONS” - we leverage proven development methodologies and the unmatched technical skill of our offshore software developers for designing and developing robust software and mobile applications tailored to meet your specific business needs and end goals. Our range of software outsourcing services in India includes:
What Sets CSS - “CLOUD SOFTWARE SOLUTIONS” - Apart in Offshore Software Development
As one of the principal offshore software development companies in India, CLOUD SOFTWARE SOLUTIONS has a proven track record of delivering robust software solutions to organizations globally. We are a team of highly experienced and competent software developers in India who work in accordance with consistent methodologies and standards. Here are some reasons why customers pick us as their offshore software development company in India:
Some Windows users reported that the DNS server could not create zone %1 from registry data. If you open the Event Viewer, you may see Event ID 504 – DNS server could not create zone. In this post, we will discuss this issue and see how it can be resolved.
What is a DNS server zone?
A DNS server zone is a distinct portion of the Domain Name System (DNS) namespace. This area is managed by one or more specific servers and keeps track of information about names and their addresses, i.e., DNS records for a specific domain or set of domains. There are various types of zones with distinctive features, including the Primary zone, Secondary zone, Stub zone, Forward lookup zone and more.
Fix Event ID 504, DNS server could not create zone
If the DNS server is not able to create a new DNS Zone, and you see Event ID 504 in the Event Viewer of your Windows computer, execute the solutions mentioned below:
- Recreate the zone
- Check the DNS configuration settings
- Make sure that the DNS client computer can resolve names
Let’s get started with the first solution.
1] Recreate the zone
Often, information for the zone stored in the registry value gets corrupted or is missing some components. This can affect the DNS server and its ability to create DNS Zones. Therefore, we recommend deleting the corrupted ones and replacing them with functional ones. Let us first see how to create a zone using the Server Manager.
- Open the Server Manager by clicking on Start > Administrative Tools > Server Manager on the DNS server.
- Go to the console tree, double-click on Roles, and then on DNS Server.
- Now, double-click on DNS. Expand the DNS server as well as the folder in it.
- Right-click on the zone, and then hit the Delete button. Similarly, right-click the folder and this time select the New Zone option.
Follow all the instructions mentioned on the screen to recreate the zone. If you cannot delete the zone using this method, do the same using the Registry Editor.
It is necessary to make amendments to the registry carefully, as even a minor mishap can have a serious impact on the system. Therefore, proceed with utmost care, and don’t forget to create a backup of the registry first.
- Go to the DNS server, click Start, and go to the Start Search.
- There, type Regedit, hit the Enter button, and navigate to the Console tree.
- There, expand the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\DNS Server\Zones
- Now, right-click on the key for the zone, and then select the Delete option.
Once done, check if the issue persists.
2] Check the DNS Configuration settings
In this solution, we will ensure that the DNS configuration settings are not the reason behind this error. Various instances are recorded where improper configuration causes DNS servers to function incorrectly. To verify the DNS configuration, follow the solutions prescribed below:
- Start Server Manager by clicking Start, then Administrative Tools, and lastly, Server Manager on the DNS server.
- In the console tree, double-click on Roles > DNS Server > DNS.
- Right-click on the DNS server, select Properties, and check the settings on each tab to ensure they have all the intended values.
- Once done, expand the DNS server, and then the Zone folder, and select Properties.
- Again, check all the values and settings on all the tabs.
Repeat the process for each zone and check whether the error recurs.
3] Make sure that the DNS client computer is able to resolve names
In this solution, we are going to make sure that the client computer is able to resolve names, i.e., it can resolve domain names to IP addresses. Failure to do so not only leads to issues such as DNS being unable to create a new zone, but can also point to other underlying problems such as misconfigured DNS settings or network connection issues.
Here’s how to make sure that the DNS Client computer can resolve names:
- Press Win + R to open the Run dialog box, type cmd, and then hit the OK button on the DNS client computer.
- Now, type ping hostname (for example, ping www.google.com) and hit the Enter button.
- If the client is able to resolve the ping, you will see the following message:
Pinging hostname [ip_address]
- If not, the following message will appear on the screen:
Ping request could not find the hostname
If the client cannot resolve names, then ensure that the Internet, firewall, security software, and corrupted cache are not the reason behind this complication. Once verified, try resolving names on the DNS client computer.
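If you prefer to script this check, name resolution can also be tested from Python's standard library (a minimal sketch; the can_resolve helper is our own, not a Windows tool):

```python
# Minimal name-resolution check, mirroring the ping test above.
# socket.gethostbyname() asks the system resolver, the same path a ping uses.
import socket

def can_resolve(hostname):
    """Return True if this client can resolve hostname to an IP address."""
    try:
        ip = socket.gethostbyname(hostname)
        print(f"Pinging {hostname} [{ip}]")
        return True
    except socket.gaierror:
        print(f"Ping request could not find host {hostname}")
        return False
```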
How many zones can we create in DNS?
The simple answer is that there is no strict limit to the number of DNS zones. However, the practical limit depends on several factors, including the DNS server software in use, the resources available on the server, and the specific requirements of the network environment. Commercial DNS servers and enterprise-grade solutions can accommodate far more zones than simpler implementations.
PaymentV1 Error : (invalid,bad_signature)
Hello, we’re trying to make a PaymentV1 transaction as described in readme.md. We’re using the latest version of the repository; we generate the signed payment and submit it, but when looking at the Helium API response for the transaction (https://api.helium.io/v1/pending_transactions/xrHd9AIx9Jd3l_TZzvGAc8wKYg9dUjtzrhYkChK8g9s)
this is what we get:
"status": "failed",
"failed_reason": "{invalid,bad_signature}",
These are the imports:
import * as crypto from '@helium/crypto';
import * as transactions from '@helium/transactions';
import * as http from '@helium/http';
import Address from '@helium/address';
The request body is:
{ keypair : ['one','two','three','four',...,'twelve']
,payeeAdress : a b58 string
,amount: a number, in this case 0.00001
}
The client and vars are initialized and set correctly
const client = new http.Client();
const vars = await client.vars.get();
transactions.Transaction.config(vars);
We also initialize the payer account and address and also the payee address
const payer = await crypto.Keypair.fromWords(keypair);
const payee = Address.fromB58(payeeAdress);
const account = await client.accounts.get(payer.address.b58);
We build the payment txn
const paymentTxn = new transactions.PaymentV1({
payer: payer.address,
payee: payee,
amount: amount,
nonce: account.speculativeNonce! + 1,
});
We sign it and submit it to the Blockchain API
const signedPaymentTxn = await paymentTxn.sign({payer : payer})
[...] (then…)
client.transactions.submit(signedPaymentTxn.toString())
(and print the request before send…)
console.log(signedPaymentTxn);
PaymentV1 {
type: 'payment_v1',
payer: Address {
version: 0,
netType: 0,
keyType: 1,
publicKey: Uint8Array(32) [
221, 47, **, ***, ***, ***, **, ***,
***, ***, ***, ***, **, **, **, **,
***, **, **, **, ***, ***, **, ***,
*, ***, ***, **, ***, ***, **, **
]
},
payee: Address {
version: 0,
netType: 0,
keyType: 1,
publicKey: <Buffer ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** **>
},
amount: 0.0001,
nonce: 5,
fee: 30000,
signature: Uint8Array(64) [
***, ***, ***, ***, **, ***, **, ***, **, ***, ***,
***, ***, ***, ***, ***, ***, ***, ***, ***, **, ***,
**, ***, ***, ***, ***, ***, ***, ***, ***, ***, ***,
***, **, ***, **, ***, ***, ***, ***, ***, ***, ***,
***, **, ***, ***, ***, ***, ***, ***, ***, *, ***,
**, ***, ***, ***, *, ***, ***, **, *
]
}
I will also mention that we’re sending 0.0001 HNT, and in the response JSON object it shows
"amount": 0
Anyway, probably the amount is 0 because of the bad_signature error.
Thanks for the detailed report. A bad_signature error can happen when the format of the transaction is incorrect, and I believe the problem here is the amount. If you are trying to send 0.0001 HNT, it should be added to a PaymentV1 transaction in bones, not as a float amount. HNT supports 8 decimal places, so float amounts need to be multiplied by 100000000. There is a convenient Balance class to do the calculations for you: Balance.fromFloat(0.00001, CurrencyType.networkToken).integerBalance gives the correct value, and accessing floatBalance on that would return the 0.00001.
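To illustrate, the float-to-bones conversion can be sketched without the SDK (a rough sketch; hntToBones and bonesToHnt are our own names, not part of @helium/currency):

```javascript
// HNT supports 8 decimal places, so 1 HNT = 100,000,000 bones.
const BONES_PER_HNT = 100000000;

function hntToBones(amount) {
  // Round to avoid floating-point artifacts, e.g. 0.00001 * 1e8 -> 1000.0000000000001
  return Math.round(amount * BONES_PER_HNT);
}

function bonesToHnt(bones) {
  return bones / BONES_PER_HNT;
}
```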
Yes, that solved the issue.
Thank you for answering. I'll take your suggestion and move to PaymentV2.
Dr. Lei Zhang
- Real Estate and Housing Economics
- Zhang, L., Leonard, T., and Bitzan, J. (2022). Impacts of the COVID-19 Pandemic on House Prices: Heterogeneous Impacts Over Time and Across the House Price Distribution. Journal of Real Estate Research
- Zhang, L., and Leonard, T. (2021). External validity of hedonic price estimates: Heterogeneity in the price discount associated with having Black and Hispanic neighbors. Journal of Regional Science, 61 (1), pp. 62--85.
- Peng, L., and Zhang, L. (2021). House prices and systematic risk: Evidence from microdata. Real Estate Economics
- Leonard, T., Yang, X., and Zhang, L. (2021). The impact of land use regulation across the conditional distribution of home prices: an application of quantile regression for group-level treatments. The Annals of Regional Science, 66 (3), pp. 655--676.
- Leonard, T., Yang, X., Zhang, L., and Reed, C. (2020). Impact of Property Tax Abatement on Employment Growth. Economic Development Quarterly, 34 (2), pp. 209--221.
- Zhang, L., and Leonard, T. (2019). Flood hazards impact on neighborhood house prices. The Journal of Real Estate Finance and Economics, 58 (4), pp. 656--674.
- Zhang, L., and Yi, Y. (2018). What contributes to the rising house prices in Beijing? A decomposition approach. Journal of Housing Economics, 41, pp. 72--84.
- Lim, S. H., and Zhang, L. (2017). Does casino development have a positive effect on economic growth? Growth and Change, 48 (3), pp. 409--434.
- Zhang, L., Leonard, T., and Dias, R. (2017). Foreclosed and sold: An examination of community and property characteristics related to the sale of REO properties. Urban Affairs Review, 53 (5), pp. 924--949.
- Leonard, T., Jha, N., and Zhang, L. (2017). Neighborhood price externalities of foreclosure rehabilitation: an examination of the Neighborhood Stabilization Program. Empirical Economics, 52 (3), pp. 955--975.
- Zhang, L., and Yi, Y. (2017). Quantile house price indices in Beijing. Regional Science and Urban Economics, 63, pp. 85--96.
- Miljkovic, D., Dalbec, N., and Zhang, L. (2016). Estimating dynamics of US demand for major fossil fuels. Energy Economics, 55, pp. 284--291.
- Zhang, L. (2016). Flood hazards impact on neighborhood house prices: A spatial quantile regression analysis. Regional Science and Urban Economics, 60, pp. 12--19.
- Zhang, L., Leonard, T., and Murdoch, J. C. (2016). Time and distance heterogeneity in the neighborhood spillover effects of foreclosed properties. Housing Studies, 31 (2), pp. 133--148.
- Leonard, T., Zhang, L., and Hoehner, C. (2015). Variations in park facility valuations across neighborhoods. Applied Spatial Analysis and Policy, 8 (1), pp. 45--67.
- Zhang, L., and Leonard, T. (2014). Neighborhood impact of foreclosure: A quantile regression approach. Regional Science and Urban Economics, 48, pp. 133--143.
- Dave, C., Dressler, S. J., and Zhang, L. (2013). The bank lending channel: a FAVAR analysis. Journal of Money, Credit and Banking, 45 (8), pp. 1705--1720.
var scorearr = [];
var resultarr = [];
// function sortResults(){
// $('#loading').addClass('active');
// $('#tempresults .itema').each(function(){
// var score = 0;
// var reviews = parseInt($(this).find('#result-review-rating').data('reviews'));
// var ratings = parseFloat($(this).find('#result-review-rating').data('ratings'));
// var distance = parseFloat($(this).find('#result-distance').text());
// var verified = parseInt($(this).find('#result-verified').length);
// var verified_score = -5;
// if(verified == 1){
// verified_score = 10;
// }
// var time_elapsed = parseInt($(this).find('#result-last-seen').data('days'));
// var time_to_divide = 24; // minutes
// if(time_elapsed != 0){
// time_to_divide = time_to_divide * time_elapsed;
// }
// if(ratings != 0){
// if(reviews == 0 || reviews == 1){
// score = (1 * Math.pow(2, ratings)) + (3/distance) + (verified_score) + (1/time_to_divide);
// }
// else{
// score = (Math.log2(reviews) * Math.pow(2, ratings)) + (3/distance) + (verified_score) + (1/time_to_divide);
// }
// }
// else{
// score = (3/distance) + (verified_score) + (1/time_to_divide);
// }
// score = parseFloat(score.toFixed(4));
// scorearr.push(score);
// $(this).attr('data-score', score);
// });
// // sort and append to sorted results
// scorearr.sort(function(a, b){
// return b - a;
// });
//
// //appending results
// scorearr.forEach((item, i) => {
// $('#tempresults [data-score="'+item+'"]').appendTo('#sortedresults');
// });
// $('#loading').removeClass('active');
// }
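The commented-out scoring formula above can be restated as a stand-alone function, free of jQuery, so it can be unit-tested in isolation (a sketch; the function and parameter names are our own):

```javascript
// Score a search result, mirroring the commented-out sortResults() formula:
// (log2(reviews) * 2^ratings) + 3/distance + verifiedScore + 1/timeToDivide.
function resultScore({reviews, ratings, distance, verified, daysSinceSeen}) {
  const verifiedScore = verified ? 10 : -5;
  let timeToDivide = 24; // base divisor, per the original
  if (daysSinceSeen !== 0) {
    timeToDivide *= daysSinceSeen;
  }
  let score;
  if (ratings !== 0) {
    // 0 or 1 reviews would give log2 of -Infinity or 0, so clamp the weight to 1
    const reviewWeight = reviews <= 1 ? 1 : Math.log2(reviews);
    score = reviewWeight * Math.pow(2, ratings)
      + 3 / distance + verifiedScore + 1 / timeToDivide;
  } else {
    score = 3 / distance + verifiedScore + 1 / timeToDivide;
  }
  return parseFloat(score.toFixed(4));
}
```

Results would then be sorted by this score in descending order, as the commented-out code does.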
function setValues(){
// localStorage.removeItem('ratings');
// localStorage.removeItem('distances');
var distances = JSON.parse(localStorage.getItem('distances'));
var ratings = JSON.parse(localStorage.getItem('ratings'));
var avg_distance = 0;
var avg_rating = 0;
var deviation_distance = 0;
var deviation_rating = 0;
if(distances != null && distances.length != 0){
avg_distance = distances.reduce(function(sum, n){ return sum + n; }, 0)/distances.length;
// note: this computes the population variance of the distances
// (despite the "deviation" name, no square root is taken)
var sum_distance = 0;
distances.forEach(function(item){
sum_distance += (item - avg_distance)*(item - avg_distance);
});
deviation_distance = sum_distance/distances.length;
// var deviation_distance = (distances.reduce(addSquaresDistance)/distances.length) - Math.pow(avg_distance, 2);
// function addSquaresDistance(sumdistance2, ndistance2){
// return sumdistance2 + ndistance2*ndistance2;
// }
}
if(ratings != null && ratings.length != 0){
avg_rating = ratings.reduce(function(sum, n){ return sum + n; }, 0)/ratings.length;
// note: population variance of the ratings, same caveat as above
var sum_rating = 0;
ratings.forEach(function(item){
sum_rating += (item - avg_rating)*(item - avg_rating);
});
deviation_rating = sum_rating/ratings.length;
// var deviation_rating = (ratings.reduce(addSquaresRatings)/ratings.length) - Math.pow(avg_rating, 2);
// function addSquaresRatings(sumrating2, nrating2){
// return sumrating2 + nrating2*nrating2;
// }
}
avg_distance = parseFloat(Math.abs(avg_distance).toFixed(2));
deviation_distance = parseFloat(Math.abs(deviation_distance).toFixed(2));
avg_rating = parseFloat(Math.abs(avg_rating).toFixed(2));
deviation_rating = parseFloat(Math.abs(deviation_rating).toFixed(2));
$('#avg_distance').val(avg_distance);
$('#deviation_distance').val(deviation_distance);
$('#avg_rating').val(avg_rating);
$('#deviation_rating').val(deviation_rating);
// console.log(ratings.reduce(addSquaresRating)/(ratings.length));
}
// function temp(){
// var distances = [0.5, 0.2, 1.2, 1.5, 0.3, 0.1];
// var ratings = [5, 4.2, 4.6, 4.5, 3.9, 5];
// localStorage.setItem('distances', JSON.stringify(distances));
// localStorage.setItem('ratings', JSON.stringify(ratings));
// }
function sortResults2(){
var arr = [];
var idarr = [];
var table = document.getElementById("table");
var rowlen = table.rows.length;
$('#loading').addClass('active');
$('#loading_text').html('Sorting results');
var celllen = 5;
for (var i = 1; i < rowlen; i++) {
var subarr = []
idarr.push(table.rows[i].cells[0].innerHTML);
for (var j = 1; j < celllen; j++){
subarr.push(table.rows[i].cells[j].innerHTML);
}
arr.push(subarr);
}
arr = JSON.stringify(arr);
$.ajax({
type: "POST",
url: '/consumers/sortResults',
data: {
arr: arr,
idarr: idarr,
},
success: function(dataarr){
console.log(dataarr);
idarr.forEach((item, i) => {
$('#tempresults [data-id="'+item+'"]').attr('data-probability', dataarr[i]);
});
dataarr.sort(function(a, b){
return b - a;
});
dataarr.forEach((item, i) => {
$('#tempresults [data-probability="'+item+'"]').appendTo('#sortedresults');
});
$('#loading').removeClass('active');
}
});
}
$(document).ready(function(){
// temp();
if($('#search').length != 0){
// sortResults();
sortResults2();
}
else{
// setting avg and deviation values
setValues();
}
});
|
STACK_EDU
|
I’ve seen a few people assert the precompiled headers are a pain in the butt, or not workable for large scale projects. In fact, it’s incredibly easy to add precompiled headers to a GCC-based project, and it can be quite beneficial.
Since I’m mostly familiar with Makefiles, I’ll present an example here that uses Make. It trivially extends to other similar build systems such as SCons, NMake, or Ant, but I’m not so sure about Visual Studio projects. This example builds a single static library and several test applications. I’ve stripped out most of my compiler flags for brevity.
# boilerplate settings...
SHELL = /bin/bash
CXX = g++ -c
CXXFLAGS += -std=c++98 -pedantic -MMD -g -Wall -Wextra
LD = g++
LDFLAGS += -rdynamic -fno-stack-protector
AR = ar

# generic build rules
# $@ is the target, $< is the first source, $^ is all sources
define compile
$(CXX) -o $@ $< $(CXXFLAGS)
endef
define link
$(LD) -o $@ $(filter %.o,$^) $(filter %.a,$^) $(LDFLAGS)
endef
define ar
$(AR) qsc $@ $(filter %.o,$^)
endef

# all library code is in src/
# all test applications are single-source,
# e.g. testfoo is produced from testfoo.cxx and libsseray.a
TEST_SRC = $(wildcard test*.cxx)
LIB_SRC = $(wildcard src/*.cxx)
TESTS = $(basename $(TEST_SRC))
LIB = libsseray.a

all : $(TESTS)
$(TESTS) : $(LIB)
$(TESTS) : % : %.o
    $(link)
%.o : %.cxx
    $(compile)
$(LIB) : $(LIB_SRC:cxx=o)
    $(ar)

# gcc-provided #include dependencies
-include $(TEST_SRC:cxx=d) $(LIB_SRC:cxx=d)

clean :
    rm -f $(LIB) $(TESTS) $$(find . -name '*.o' -o -name '*.d')
In order to use a precompiled header, this is what needs to be added to the Makefile. There are no source code modifications at all. I created a file pre.h that includes all of the system C and C++ headers that I use (in particular <iostream> is a big expense for the compiler).
# PCH is built just like all other source files
# CXXFLAGS must match everything else
pre.h.gch : pre.h
    $(compile)

# all object files depend on the PCH for build ordering
$(TEST_SRC:cxx=o) $(LIB_SRC:cxx=o) : pre.h.gch

# this is equivalent to adding '#include <pre.h>' to the top of every source file
$(TEST_SRC:cxx=o) $(LIB_SRC:cxx=o) : CXXFLAGS += -include pre.h

# pre.h.gch should be cleaned up along with everything else
clean :
    rm -f (...) pre.h.gch
The project itself is fairly small—16 source files totaling 1800 SLOC—but this small change just decreased my total build time from 12 to 8 seconds. This is entirely non-intrusive to the code, so adding it is a one-time cost (as opposed to the Microsoft stdafx.h approach, which is O(N) cost in the number of source files). It is also easy to disable for production builds, if you only want to use the PCH machinery for your day-to-day edit-compile-test cycle.
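For concreteness, a pre.h of the kind described might look like the sketch below. The exact header list here is hypothetical and project-specific; the only header the post actually names is &lt;iostream&gt;:

```cpp
// pre.h -- precompiled-header candidate (illustrative sketch).
// Rule of thumb: put only stable system headers here, never
// project headers that change during day-to-day development.
#ifndef PRE_H
#define PRE_H

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>   // the big compile-time cost called out in the post
#include <map>
#include <string>
#include <vector>

#endif // PRE_H
```

With the Makefile rules above, g++ builds pre.h.gch once, and every translation unit effectively starts with this header via `-include pre.h`.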
|
OPCFW_CODE
|
How to build Windows and Mac plugins simultaneously in Eclipse
After developing the plugin, I need the build process to produce it for both Windows and macOS.
I've had no problems developing and building on MacOS so far.
But now we have to build on Windows.
Libraries (e.g. SWT) used when developing on MacOS are available for both Mac and Windows, so plug-ins built on MacOS can also be used on Windows.
However, when building on Windows, there are libraries that exist only on MacOS and not on Windows.
org.eclipse.swt.internal.cocoa.NSObject
org.eclipse.swt.internal.cocoa.NSString
org.eclipse.swt.internal.cocoa.NSView
org.eclipse.swt.internal.cocoa.OS
Because cocoa is a library for MacOS, even if you add this library to the target platform, Windows will not be able to load this library.
What should I do?
It is unclear to me what you are asking. The plugin/bundle org.eclipse.swt is platform-independent, provides the API and requires a platform-specific plugin, or more precisely a fragment/bundle with the platform-dependent implementations. For Windows, for instance, there is the fragment org.eclipse.swt.win32.win32.x86_64. If your code depends on one of the mentioned classes like org.eclipse.swt.internal.cocoa.NSObject, you are doing something wrong. In general, your code should not reference anything from another plugin/bundle from a package that has internal in its name.
You should be using [tag:tycho] to do the build, this will choose the correct fragment for each platform. You should not normally be using any of those internal classes - if you are you need to put them in a platform specific plugin or fragment just like SWT does.
@howlger To build on both MacOS and Windows
I added both org.eclipse.swt.win32 and org.eclipse.swt.cocoa to the target platform.
At this time, we confirmed that only libraries appropriate for the operating system are loaded. This is exactly what I want to ask.
@howlger There are libraries for Windows and Mac OS in the target platform path, so I want to load them. You just need to be able to build regardless of execution.
@greg-449 I don't know what tycho is...
Currently Windows cannot load the libraries for macOS, and a red X error marker is appearing. Can this be resolved?
You will have to learn about tycho. It is not possible to do a single build that runs on all platforms, you need a separate build for each platform which tycho will do for you. tycho will select the appropriate SWT fragments for the different platforms. tycho tutorial
@greg-449 I don't think the problem I'm experiencing will be solved with Tycho.
This is because plugins for Windows, Mac, and Linux (Ubuntu) are already built and run normally with my build script.
If this is an issue that needs to be done with Tycho, I think it will only be possible to build for a single platform.
My point is, how do you load libraries for Mac OS on Windows? (I think the question itself makes no sense, but...)
Tycho will solve your problem because it will not include the macOS components in the build for Windows (assuming everything has the correct platform requirements set in the MANIFEST.MF). You cannot run macOS code on Windows.
@greg-449 So how do I fix the error marker (the red X) showing in Eclipse? Anyway, Eclipse for Windows cannot load libraries for Mac.
As I already said you have to build separate versions of Eclipse for each platform containing only the plugins for that platform.
@greg-449 So... in other words, you need to build a separate Eclipse corresponding to the platform.
To build a plug-in for Windows, you need a Windows PC:
You need a Mac to build plugins for Mac.
If so, you must have at least two physical PCs (for different operating systems).
If this is true, do I really need to build it using tycho?
Is it just a maintenance aspect?
No, tycho can build code for all the platforms on a single machine. It will produce multiple versions of Eclipse targeting the different platforms. So you will get one that will run on Windows, one for macOS Intel, one for macOS Apple Silicon, ...
Let us continue this discussion in chat.
@greg-449 So, I understand that tycho is a tool for creating a complete build that runs on multiple platforms. However, when you run Eclipse on Windows and clone a project from git, isn't it natural that an error occurs because the project uses a class like NSObject that is used on Mac? I asked if this could be resolved
No, as I have already said multiple times you build different versions of Eclipse for different platforms. A plugin that is for macOS should specify that in the MANIFEST.MF and tycho will not include it in builds for other platforms. This is a long established system that Eclipse itself uses to build different versions of SWT for different platforms.
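For reference, the mechanism described here, marking a fragment as macOS-only so that Tycho/p2 exclude it from Windows builds, is expressed with the Eclipse-PlatformFilter header in MANIFEST.MF. The bundle names below are placeholders:

```
Bundle-SymbolicName: com.example.myplugin.macos;singleton:=true
Fragment-Host: com.example.myplugin
Eclipse-PlatformFilter: (& (osgi.os=macosx) (osgi.ws=cocoa))
```

This mirrors how the SWT fragments themselves (e.g. org.eclipse.swt.cocoa.macosx.x86_64) are restricted to their platform.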
The project should not use internal classes like org.eclipse.swt.internal.cocoa.NSObject. This is obviously a bug in the code of the project that needs to be fixed on the project side, and that has been overlooked all this time until you decided to also support Windows. This is not a build issue; it's a bug in the project, in the code you didn't show. Your question is an XY problem, not showing the root problem Y.
|
STACK_EXCHANGE
|
Register COM reference to 64-bit Windows 7 machine
I am writing a C# program that interfaces with a COM object through COM interop.
I have a third-party program that registers itself as the COM server when I execute the application. This works fine in 32-bit Windows Vista and I can interface with the interop just fine. (The reference shows up in the "COM" tab in Visual Studio when you click "Add Reference".)
However, the reference does not show up in the "COM" tab on my 64-bit Windows 7 machine after I execute the application. Any thoughts on why this would happen? I actually tried using regsvr32.exe to register the application manually, but that didn't work either (error message saying "entry-point DllRegisterServer was not found").
You are not going to be able to use it as long as it doesn't show up in the COM tab. The regsvr32.exe utility is for DLLs, this however sounds like an EXE. If it is a DLL then it needs to be registered with the 32-bit version of regsvr32.exe, the one in c:\windows\syswow64. If it is an EXE then the normal way to get it to register itself is by running it with the /regserver command line option.
Mumble.exe /RegServer
Additionally, if this is a DLL or an EXE for which you don't have a 64-bit proxy/stub then you'll have to force your app to run in 32-bit mode. Project + Properties, Build tab, Platform Target = x86.
If all else fails, you really do need support from the vendor of this program. Surely they'll have an update available that is verified to work properly on 64-bit operating systems. If they are no longer around, running this in a virtual machine is always a possibility.
Thank you very Hans, this is very helpful. The file I was trying to register is actually .exe, no wonder I couldn't do it. I did already set the Platform target = x86. Where is Mumble.exe located? How do I get it?
I can't help you find the .exe, don't you already know? It's the 3rd party program you mention in your OP.
Oh sorry, I probably misunderstood you. I thought you mentioned about a tool called Mumble.exe -- it's probably just yr example. Of course I would know where that 3rd party program is located.
I'll call it Foo.exe next time :)
If it is a managed dll then you might try using RegAsm
REGASM AssemblyName.dll /tlb:AssemblyName.tlb
You may find this helpful, as I needed to recompile and build a 64-bit proxy/stub for the COM server from C++ myself, and it kept failing when trying to register the server using /regserver. Here is a thread from Microsoft that helped me resolve this issue. Basically you need to use /RegServerPerUser instead, but go through the thread if you get into this situation after trying the answers above.
http://social.msdn.microsoft.com/Forums/en/vcprerelease/thread/11f01ceb-52a4-438f-b7ef-727ce7a3e191
|
STACK_EXCHANGE
|
Does List.retainAll() use HashMap internally?
I am purposefully violating the hashCode contract that says that if we override equals() in our class, we must override hashCode() as well, and I am making sure that no Hash related data structures (like HashMap, HashSet, etc) are using it. The problem is that I fear methods like removeAll() and containsAll() of Lists might use HashMaps internally, and in that case, since I am not overriding hashCode() in my classes, their functionality might break.
Can anyone please confirm whether my doubt is valid? The classes contain a lot of fields that are used for equality comparison, and I would have to come up with an efficient technique to compute a hashCode from all of them. I really don't need them in any hash-related operations, and as such, I am trying to avoid implementing hashCode().
Which implementation of the List interface?
You could: 1. Look at the source code 2. Test it
You can lookup the source code of for example ArrayList in the src.zip that's in your JDK installation directory. But really, why are you "purposefully violating the hashCode contract"? Sounds like a recipe for disaster. Isn't there a better solution?
+1 look at the source code, exactly
Even if the current implementation of the List variant you use works for you, it may change in the next release, triggering a hard to find bug. And even if you consistently do not use hash data structures, there is no guarantee that new members of your project team (possibly after you have left) won't ever do it, again triggering a hard to find bug.
A surprising number, yes. Honestly, the question should really be downvoted for total lack of research.
Don't. Whatever you're doing, this is the wrong way to go about it. Step away from the keyboard and think about the problem some more.
But, as others have pointed out, the question is unanswerable in its current form (e.g. which List implementation?) and even if it holds right now, it won't necessarily hold in future if one has deliberately broken the contract. They're called contracts for a reason, after all.
@Gabe: +1... Exactly. It's a bit easy to say "Use the source, Luke" when one doesn't know the answer. For example, had this question been: "Do I need a proper hashCode() implementation if I put elements in a HashMap?", I'm willing to bet a lot of money that the answer would not have been "look at the source code / test it".
Moreover this "answer" isn't an answer: as Péter Török noted, even if things do work with the current implementation of the List variant OP is using, it may break in the next release. So "Test it" is hardly a sound advice. If you believe this answer should be closed, vote for close instead of trying to gain rep...
Its an ArrayList implementation
I think a simple way to test if hashCode() is being used anywhere is to override hashCode() for your class, make it print a statement to the console (or a file if you prefer) and then return some random value (won't matter since you said you don't want to use any hash-based classes anyway).
However, I think the best would be to just override it; I'm sure some IDEs can even do it for you (Eclipse can, for example). If you never expect it to get called, it can't hurt.
Thanks for the eclipse tip.. that was very helpful
From AbstractCollection.retainAll()
* <p>This implementation iterates over this collection, checking each
* element returned by the iterator in turn to see if it's contained
* in the specified collection. If it's not so contained, it's removed
* from this collection with the iterator's <tt>remove</tt> method.
public boolean retainAll(Collection<?> c) {
boolean modified = false;
Iterator<E> e = iterator();
while (e.hasNext()) {
if (!c.contains(e.next())) {
e.remove();
modified = true;
}
}
return modified;
}
I checked the contains() method too.. it doesn't actually use any hash-related functions.. but it might change in the future.. so the best way would be to implement a hashCode() implementation, I guess.
If you do list.retainAll(hashSet), defining a hashCode/equals would be used. If you do list.retainAll(list), only equals will be used. You can always do list.retainAll(new HashSet(collection)) to ensure a hash lookup is used.
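The point about which lookup path gets used can be checked directly. In this sketch, Key is a hypothetical class with equals() overridden but hashCode() deliberately left as the identity-based default, mirroring the question's setup:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical class: equals() is overridden, hashCode() is deliberately
// NOT overridden, so it falls back to the identity-based Object.hashCode().
class Key {
    final int id;
    Key(int id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return (o instanceof Key) && ((Key) o).id == id;
    }
}

public class RetainAllDemo {
    public static void main(String[] args) {
        List<Key> list = new ArrayList<>(List.of(new Key(1), new Key(2)));

        // ArrayList.retainAll(List): the contains() check walks the argument
        // using equals() only, so the missing hashCode() never comes into play.
        list.retainAll(List.of(new Key(1)));
        System.out.println(list.size()); // prints 1

        // A HashSet of the same elements, by contrast, buckets by identity
        // hash, so an equal-but-distinct Key is almost certainly not found.
        Set<Key> set = new HashSet<>(List.of(new Key(1)));
        System.out.println(set.contains(new Key(1))); // almost certainly false
    }
}
```

This is why retainAll between plain lists happens to work for the question's classes today, while any hash-based collection silently misbehaves.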
As for
I will have to come up with an efficient technique to get a hashCode using all of them
You don't need to use all of the fields used by equals in your hashCode implementation:
It is not required that if two objects are unequal according to the equals method, then calling the hashCode method on each of the two objects must produce distinct integer results. However, the programmer should be aware that producing distinct integer results for unequal objects may improve the performance of hashtables.
Therefore, your hashCode implementation could be very simple and still obey the contract:
public int hashCode() {
return 1;
}
This will ensure that hash-based data structures still work (albeit at degraded performance). If you add logging to your hashCode implementation, then you could even check if it is ever called.
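To illustrate the quoted part of the contract: a constant hashCode() is legal, and hash-based collections still behave correctly with it, just slowly. Pair is a hypothetical stand-in for a class whose equals() compares many fields:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical class: equals() uses the fields, hashCode() is the constant 1.
// This obeys the contract (equal objects always share the hash code), but every
// bucket lookup degenerates to a linear scan through one bucket.
class Pair {
    final int a, b;
    Pair(int a, int b) { this.a = a; this.b = b; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Pair)) return false;
        Pair p = (Pair) o;
        return a == p.a && b == p.b;
    }
    @Override public int hashCode() { return 1; } // constant, but contract-abiding
}

public class ConstantHashDemo {
    public static void main(String[] args) {
        Set<Pair> set = new HashSet<>();
        set.add(new Pair(1, 2));
        set.add(new Pair(1, 2)); // duplicate: detected via equals() in the bucket
        set.add(new Pair(3, 4));
        System.out.println(set.size()); // prints 2
    }
}
```

So the "efficient technique using all the fields" is not actually required for correctness, only for performance.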
|
STACK_EXCHANGE
|
Missing "page" Size/Unit in metadata
Among the numerous size units (such as the all-time favourite "units" :) there is no "pages", yet this is a very natural (but of course imprecise) way of quantifying printed and written text. Esp. for historical texts.
I'd suggest adding it.
This is a broader question of how much we want to diverge from MetaShare (MS) metadata schema. This "size" is taken over from MS minimal schema, along with the set of possible values. I agree it is not ideal, but I am not quite sure about the solution:
Do we try to change this via MetaShare (in CRACKER project right now)? That would be a lot of work with uncertain result.
Do we just change it as we see fit, breaking MS compatibility? Is it worth it?
Do we just keep it as it is, even though not ideal?
Well, I'd go for 2, the public spirited way is 1, and for you the best is 3.
Anyway, it's not a big problem, more a small irritation - we can easily close the issue with 3.
I agree we should look into 2) to see what implication it would have. I can't say right now, and the decision is also not quite up to me, but we will look into it.
So I would keep this open for now.
Great, I think this is the right thing to do!
I found another missing unit: n-grams.
The vocabulary has from unigrams up to 5-grams, but is missing the generalisation, which is just silly.
A student just made an n-gram list for Slovene, where the whole point is that it is a ranked list containing (the most salient) uni-, bi-, etc. grams. Currently, if we want to submit it, we have to say "entries" or even "units".
Maybe if there is a "batch" request for extension, it could be more convincing..
According to
http://www.meta-net.eu/meta-share/META-SHARE documentationUserManual.pdf
sizeUnit is open controlled vocabulary which means we can add new units easily.
Moreover, n-gram should be there.
Clearly one of the best candidates for managing the vocabulary jointly in OpenSkos/Clavas for the whole CLARIN. We all need the same set of units, I think.
@vidiecan The documentation says so but the .xsd used to check validity is implemented differently https://github.com/metashare/META-SHARE/blob/master/misc/schema/v3.0/META-SHARE-SimpleTypes.xsd#L45 (if I'm reading it right)
In case the xsd wins, we can always map them to "other".
|
GITHUB_ARCHIVE
|
Android Development Training
PROJECT BASED ANDROID DEVELOPMENT TRAINING
In Android development training from JNtech Networks, you will learn how to design and deploy various kinds of Android apps. You will cover a wide range of concepts in Android programming, from a simple Hello World app to complex applications for Android devices.
Here you will learn Android app development from trainers with more than 6 years of industry experience developing Android apps. We provide live projects and instructor-led training geared to the requirements of the market.
Boosts your journey with an Android development course
Android is a Linux-based operating system and software stack for mobile devices such as smartphones and tablet computers. Android code is mainly written in the Java language, although other languages can also be used to build Android applications.
The aim of the Android training course is to offer developers easy and complete knowledge of Android App Development with our classroom training. The Android Development Training course offers a series of sessions & lab assignments that introduce and explain Android features that are used to code, debug and deploy Mobile Applications
Now building Android Apps is the base of our advanced Android curriculum. Android development course combine theory and practice to help you build great apps the right way. In the Android course, you will easily work with a trainer step-by-step to develop a cloud-connected Android app and study best practices of mobile development, and Android development in particular.
Why Android Training is Required?
What makes Android so recognizable is its free and open-source nature, which gives developers a fair measure of flexibility to express their creativity. This openness is one of the main reasons for the success of this Linux-based operating system.
It is one of the fastest-growing operating systems and a promising career path for Java candidates. By enrolling in a good Android course, learners can apply it to an assortment of projects, including signal processing and communications, photo and video processing, control structures, analysis and estimation, finance, and science.
More than a million engineers and researchers in industry and academia use Android. Our professionals understand how critical it is for you to build better applications and be effective in your job. To meet every type of training need, we have made the Android Training Course comprehensive, ranging from trainee to more advanced material.
Android Training Objectives
Upon completion of the Android course, attendees will be able to:
● Understand Android platform architecture
● Operate Android SDK’s Emulator to test and debug applications
● Construct user interfaces with built-in views and layouts
● Design, develop, debug, and deploy Android applications
● Define custom view and layout
● Develop SQLite Database
● Write multimedia Android applications
● Write location-based applications
● Secure Android applications
● Interact with Servers using Web Services
Unlock New Opportunities
JNtech Networks builds confident developers who think beyond industry jobs and grow their ideas into self-created entrepreneurial ventures. Expertise and contemporary thinking give developers the confidence to transform their ideas into real-life products and, hopefully, go on to build million-dollar companies.
Connect Talent with Employers
Besides training candidates with modern technologies and programming languages, we join them to software companies via our placement assistance program. Along with the Android development course, we also include practicing a lot of interview problems and mock interviews conducted by company representatives. JNtech Networks provides end-to-end learning on Android Domain with deeper dives for creating a winning career for every profile.
Instructor Led Training/Online Training
On Demand Training
Android Development Course Duration
Duration Basic Course
Duration Advance Course
As per candidate schedule
Fee of Android Development Course
|Fee||Indian Students||International Students|
|Basic Course Fee||Rs. 8000 INR||$200 USD|
|Advance Course Fee||Rs. 15000 INR||$400 USD|
Syllabus Android Development
- Why Android?
- Android Run Time
- Android Studio
- Introduction to Gradle
• Basic Building blocks – Activities, Services, Broadcast Receivers & Content providers
• UI Components- Views & notifications
• Components for communication -Intents & Intent Filters
• Android API levels(versions & version names)
• Activity/services/receiver declarations
• Resources & R.java
• Layouts & Drawable Resources
• Activities and Activity lifecycle
• Launching emulator
• Editing emulator settings
• Emulator shortcuts
• Logcat usage
• Introduction to Android Device Monitor (ADM)
• File explorer
• Explicit Intents
• Implicit intents
• Form widgets
• Text Fields
o RelativeLayout ,TableLayout, FrameLayout, LinearLayout, Nested layouts
o [dip, dp, sip, sp] versus px
o styles.xml
o drawable resources for shapes, gradients (selectors)
o Style attribute in layout file
• Applying themes via code and manifest file
• AlertDialogs & Toast
• Time and Date
• Images and media
• Option menu And Action Bar( menu in action bar)
• Context menu and contextual action mode
• Popup menu
• menu from xml
• menu via code
• Match Filter & Transform Filter
- Array Adapters
- Base Adapters
- List View and ListActivity
- Custom list view
- Grid View using adapters
- Gallery using adapters
- Android Session and Session management
- DML & DDL Queries in brief
- SQLite Database
- SQLite Open Helper
- SQLite Programming
- Reading and updating Contacts
- Android Debug Bridge (adb) tool
- Broadcast Receivers
- Via service
- Animated popup panels
- Grid view
- XML Parsing
- Android JSON parsing using Volley
- How to create REST API for Android app using PHP
- Accessing Phone services (Call,SMS)
- Introduction to fragments
- Fragments Life Cycle
- Fragments in Activity
- Google Maps V2 using Fragments
- Develop Fragment based UI designs (Fragment Tabs, List View etc)
- Location based Services
- Network connectivity services
- Sensors (Accelerometer, Gyroscope).
- Using Wi-Fi& Bluetooth.
- Google Cloud Messaging for Android
- App Widgets.
|
OPCFW_CODE
|
Composed Covers
I have problems solving this seemingly straightforward question.
Let $q : X \rightarrow Z$ be a covering space. Let $p : X \rightarrow Y$ be a covering space. Suppose there is a map $r : Y \rightarrow Z$ such that $q = r \circ p$. Show that $r : Y \rightarrow Z$ is a covering space.
Could someone give me a hint?
Of course I should pick some covering definition and show that $r$ indeed satisfies this.
Thank you
Use functoriality of the preimage map: $q^{-1} = p^{-1} \circ r^{-1}$.
Dear Jake, I think a complete, non hand-waving proof, would be rather messy to write-up in complete detail. Could you tell us who gave you this homework ? And post the teacher's solution in due time: I wonder how long it will be! I have written a complete solution but I have used a non-trivial theorem in Spanier's classic Algebraic Topology.
We will suppose that our spaces are locally connected, so that connected components are open and closed.
The space $Z$ can be covered by open connected subsets over which $q$ is trivial, and since the restriction $res(p): p^{-1}(r^{-1}(U)) = q^{-1}(U) \to r^{-1}(U)$ is still a covering, we may and will henceforth assume that $q$ is a trivial covering and that $Z$ is connected.
The core of the proof
Take a connected component $V\subset X$ of $X$ (a sheet of the trivial covering $q$).
Its image $p(V)$ will be a connected component of $Y$, according to Spanier's Algebraic Topology, Chap.3, Theorem 14, page 64.
But then $res(r):p(V)\to Z$ is a homeomorphism and since, by surjectivity of $p$, the space $Y$ is a disjoint union of such $p(V)$, the map $r:Y\to Z$ is a trivial covering whose sheets are exactly the connected components of $Y$.
Take an open subset $U\subseteq Z$ and take its preimage along $q$. Then you know that $q^{-1}(U)$ is homeomorphic to $U_X\times F_X$; since $q=r\circ p$, you know $U_X\times F_X\cong q^{-1}(U)=(r\circ p)^{-1}(U)=p^{-1}(r^{-1}(U))$ is the same (as Zhen Lin pointed out). And now you have to work your way through why $r^{-1}(U)$ is homeomorphic to $U_Y\times F_Y$.
As it is homework, I didn't want to do it all.
P.S. I find it much more understandable when I draw pictures of commutative diagrams.
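For the record, the preimage bookkeeping that both answers rely on can be written out as follows (a sketch, assuming $U \subseteq Z$ is open, connected, and evenly covered by $q$, with fiber $F_X$):

```latex
% q trivial over U, plus functoriality of preimages:
q^{-1}(U) \;\cong\; U \times F_X,
\qquad
q^{-1}(U) \;=\; (r \circ p)^{-1}(U) \;=\; p^{-1}\!\bigl(r^{-1}(U)\bigr).
% Each sheet of q over U maps homeomorphically onto U, so its image
% under p maps homeomorphically onto U via r; these images cover
% r^{-1}(U) because p is surjective.
```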
|
STACK_EXCHANGE
|
using System;
namespace IoFx.Framing.LengthPrefixed
{
struct BufferedReader
{
private int _readBytes;
private ArraySegment<byte> _dst;
private int _bufferSize;
public ArraySegment<byte> Data
{
get { return _dst; }
}
public void StartRead(ArraySegment<byte> destination)
{
_bufferSize = destination.Count;
_dst = destination;
_readBytes = 0;
}
/// <summary>
/// Reads the source to fill the reader and updates the
/// source with the remaining bytes.
/// </summary>
/// <param name="source"></param>
/// <returns></returns>
public bool TryRead(ref ArraySegment<byte> source)
{
var pending = _bufferSize - _readBytes;
var count = Math.Min(pending, source.Count);
if (count > 0)
{
Buffer.BlockCopy(
source.Array,
source.Offset,
_dst.Array,
_dst.Offset + _readBytes,
count);
_readBytes += count;
source = new ArraySegment<byte>(
source.Array,
source.Offset + count,
source.Count - count);
}
return _readBytes == _bufferSize;
}
}
}
|
STACK_EDU
|
// Copyright (c) 2018 Eric Barkie. All rights reserved.
// Use of this source code is governed by the MIT license
// that can be found in the LICENSE file.
package telnet
import (
"encoding/hex"
"errors"
"fmt"
)
// Errors.
var (
ErrNegAskDenied = errors.New("ask violates let")
)
// negState is a RFC1143 option negotiation state.
type negState byte
const (
nsNo negState = iota // Disabled
nsYes // Enabled
nsWantNo // Negotiating for disable
nsWantNoOpp // Want to enable but previous disable negotiation not complete
nsWantYes // Negotiating for enable
nsWantYesOpp // Want to disable but previous enable negotiation not complete
)
func (t *Ctx) indicate(cmd Command, code byte) {
s := t.os.load(code)
Trace.Printf("Indicating %s option %s", cmd, s.opt)
t.rw.Write([]byte{byte(iac), byte(cmd), code})
}
func (t *Ctx) ask(cmd Command, opt Option) (err error) {
Trace.Printf("Asking %s option %s", cmd, opt)
s := t.os.load(opt.Byte())
switch cmd {
case will:
// We are asking if we can enable an option.
switch s.us {
case nsNo:
if s.opt.LetUs() {
t.indicate(will, opt.Byte())
s.us = nsWantYes
} else {
err = ErrNegAskDenied
}
case nsWantNo:
s.us = nsWantNoOpp
case nsWantYesOpp:
s.us = nsWantYes
}
case wont:
// We are indicating that we are disabling an option.
switch s.us {
case nsYes:
t.indicate(wont, opt.Byte())
s.us = nsWantNo
case nsWantNoOpp:
s.us = nsWantNo
case nsWantYes:
s.us = nsWantYesOpp
}
case do:
// We are asking that he enable an option.
switch s.him {
case nsNo:
if s.opt.LetHim() {
t.indicate(do, opt.Byte())
s.him = nsWantYes
} else {
err = ErrNegAskDenied
}
case nsWantNo:
s.him = nsWantNoOpp
case nsWantYesOpp:
s.him = nsWantYes
}
case dont:
// We are asking that he disable an option.
switch s.him {
case nsYes:
t.indicate(dont, opt.Byte())
s.him = nsWantNo
case nsWantNoOpp:
s.him = nsWantNo
case nsWantYes:
s.him = nsWantYesOpp
}
}
t.os.store(s)
return
}
func (t *Ctx) negotiate(cmd Command, code byte) (err error) {
s := t.os.load(code)
Trace.Printf("Received %s option %s", cmd, s.opt)
switch cmd {
case will:
// He is asking if he can enable an option or accepting our
// request that he enable an option.
switch s.him {
case nsNo:
if s.opt.LetHim() {
t.indicate(do, code)
s.him = nsYes
Debug.Printf("Option %s enabled for him", s.opt)
s.opt.SetHim(t, true)
} else {
t.indicate(dont, code)
}
case nsYes:
// Ignore
case nsWantNo:
err = fmt.Errorf("%s option %s answered by %s", dont, s.opt, will)
s.him = nsNo
Debug.Printf("Option %s disabled for him", s.opt)
s.opt.SetHim(t, false)
case nsWantNoOpp:
err = fmt.Errorf("%s option %s answered by %s", dont, s.opt, will)
fallthrough
case nsWantYes:
s.him = nsYes
Debug.Printf("Option %s enabled for him", s.opt)
s.opt.SetHim(t, true)
case nsWantYesOpp:
t.indicate(dont, code)
s.him = nsWantNo
}
case wont:
// He is indicating that he is disabling an option, accepting our
// request that he disable an option, or refusing our request for him
// to enable an option.
switch s.him {
case nsNo:
// Ignore
case nsYes:
t.indicate(dont, code)
fallthrough
case nsWantNo, nsWantYes, nsWantYesOpp:
s.him = nsNo
Debug.Printf("Option %s disabled for him", s.opt)
s.opt.SetHim(t, false)
case nsWantNoOpp:
t.indicate(do, code)
s.him = nsWantYes
}
case do:
// He is accepting our request for us to enable an option or asking us
// to enable an option.
switch s.us {
case nsNo:
if s.opt.LetUs() {
t.indicate(will, code)
s.us = nsYes
Debug.Printf("Option %s enabled for us", s.opt)
s.opt.SetUs(t, true)
} else {
t.indicate(wont, code)
}
case nsYes:
// Ignore
case nsWantNo:
err = fmt.Errorf("%s option %s answered by %s", wont, s.opt, do)
s.us = nsNo
Debug.Printf("Option %s disabled for us", s.opt)
s.opt.SetUs(t, false)
case nsWantNoOpp:
err = fmt.Errorf("%s option %s answered by %s", wont, s.opt, do)
fallthrough
case nsWantYes:
s.us = nsYes
Debug.Printf("Option %s enabled for us", s.opt)
s.opt.SetUs(t, true)
case nsWantYesOpp:
t.indicate(wont, code)
s.us = nsWantNo
}
case dont:
// He is refusing our request for us to enable an option or asking us
// to disable an option.
switch s.us {
case nsNo:
// Ignore
case nsYes:
t.indicate(wont, code)
fallthrough
case nsWantNo, nsWantYes, nsWantYesOpp:
s.us = nsNo
Debug.Printf("Option %s disabled for us", s.opt)
s.opt.SetUs(t, false)
case nsWantNoOpp:
t.indicate(will, code)
s.us = nsWantYes
}
}
t.os.store(s)
return
}
func (t *Ctx) subnegotiate(code byte, params []byte) {
s := t.os.load(code)
Trace.Printf("Subnegotiation option %s", s.opt)
Trace.Printf("Parameters\n%s", hex.Dump(params))
s.opt.Params(t, params)
}
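As a standalone illustration of the RFC 1143 state machine driving `negotiate` above, the receive-WILL transitions for the "him" side can be sketched as a pure function. The names here are illustrative only and are not part of the package; the sketch assumes the option is always permitted (i.e. `LetHim()` returns true).

```go
package main

import "fmt"

// Simplified RFC 1143 "him" states, mirroring nsNo..nsWantYesOpp above.
type state int

const (
	no state = iota
	yes
	wantNo
	wantNoOpp
	wantYes
	wantYesOpp
)

// recvWill returns the next state and the reply to send ("DO", "DONT",
// or "" for no reply) when the peer sends WILL, assuming we always
// permit the option.
func recvWill(s state) (state, string) {
	switch s {
	case no:
		return yes, "DO" // enable and acknowledge
	case wantNo:
		return no, "" // our DONT was answered by WILL: give up, stay disabled
	case wantNoOpp, wantYes:
		return yes, "" // negotiation completes in the enabled state
	case wantYesOpp:
		return wantNo, "DONT" // we changed our mind: start disable negotiation
	}
	return s, "" // yes: already enabled, ignore
}

func main() {
	next, reply := recvWill(no)
	fmt.Printf("no + WILL -> state %d, reply %q\n", next, reply)
}
```

The same shape repeats for WONT, DO, and DONT in the code above; only the side ("him"/"us") and the replies change.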
|
STACK_EDU
|
I am having problems with a new laptop and was wondering if anyone could help. I am not a very technical person, so if I have left out any important information here, please forgive me.
I am using an Acer Swift 3 with Ryzen 5 Processor and Vega 8 graphics. I am repeatedly getting a blue screen error which says TDR failure (see screenshot). This tends to happen when watching Youtube videos, although it happened this morning and no video was playing. I have been in regular contact with the shop I bought from and Acer and on their advice I have done the following:
- Uninstalled and reinstalled the graphics driver
- Tried to update drivers (manually within device manager - says best drivers are already installed. Also tried AMD detection utility - unable to find drivers)
- Updated BIOS
- Factory reset the laptop
The problem has not been resolved, and they now say they don't know what else to try and advise that I exchange it or get a refund. I would rather not do this as I'm happy with the laptop otherwise. This is already the second one I've had; I exchanged the first one due to other display-related issues, although I wasn't getting a blue screen then.
Another screenshot shows the current driver I have. Is anyone aware of what might be causing this problem and how I can fix it? If there is any more information I can provide please let me know. If anyone knows how to fix it and can instruct me I would really appreciate it.
Thank you in advance
I don't trust Windows Update to provide the correct AMD drivers. I'm not sure which exact SF315 model you have but the VGA drivers should be all the same.
This is what I would do:
You will probably face another AMD driver update from WU due to the recently announced integration of Raven Ridge drivers back into the regular AMD Adrenalin drivers. It would probably be more beneficial than detrimental if you disabled the Windows Update service through services.msc. When the new driver is ready, you can just download it from the AMD website and install it manually.
Had the blue screen another two times, both times I was playing an online slot game.
The only two things I can find that I haven't yet tried are changing the link state power management setting to "Off", as explained here: https://www.xtremerain.com/fix-video-tdr-failure/, and adjusting the TDR setting as explained here: Graphics driver stopped responding and has recovered....TDR fix
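(For reference, the TDR tweak in that second link appears to come down to a single registry value; this is a sketch of the .reg file as I understand it. Back up the registry first, and note the 8-second delay is the guide's suggestion, not an official recommendation:)

```
Windows Registry Editor Version 5.00

; Raise the GPU Timeout Detection and Recovery delay from the default
; 2 seconds to 8 seconds (TdrDelay is a DWORD, measured in seconds).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:00000008
```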
I haven't tinkered with anything yet as I don't really know what I'm doing. Would you suggest giving either of these a try?
UPDATE: Adjusting the TDR setting to 8 did not work. I will try changing the link state setting next; if that doesn't work, it will have to go back.
At this point I think you should try an exchange. That the support suggested getting an exchange or refund without offering to place you in contact with a more technically-experienced person could mean that your problem is known internally, but no solution is currently available. Perhaps this issue is specific to certain hardware, and not just the entire lineup. For all we know this could be a driver problem, but until a new driver is released you'll never know.
You can install the new driver available here : Radeon™ Software Adrenalin Edition 18.5.1 Release Notes
Note : It is essential that you reboot the system after you install these drivers.
Please submit your feedback on any issues with this driver here.
|
OPCFW_CODE
|
M: Ask HN: Do other "DevOps engineers" feel little skill/career progression? - ciaphascain
I've been working as a "DevOps engineer" for about 5 years now. While that title is often ambiguous, for me it has mostly involved infrastructure work, mostly in AWS: automating deployments, managing CI/CD infrastructure, on-call rotations, etc.<p>I came to the realization recently that I haven't really felt like I'm working towards mastery of any skills and it's making me depressed. I know how to use a lot of tools, I've gained quite a bit of familiarity with systems end-to-end, but I generally feel like I learn just as much as I need to solve the problem at hand and then switch to something else. I feel like I'm always settled into this role of implementing minor improvements, creating proofs-of-concept, and helping people with infrastructure issues (aka being distracted).<p>I rarely get the feeling that I'm in any sort of good work "flow", or that I'm doing anything really useful, rather that I'm just cleaning up and making incremental dents in backlogs of tech debt.<p>I'm not sure if I'm not being assertive enough with my needs and ideas, or I find roles that are destined to continue this trend, or if I need to get out of "DevOps" engineering altogether.<p>Does anyone else feel stagnated like this in their careers?
R: eropple
Oh hey, it's something actually relevant to my interests. Seasoned DevOps
Profeshnal hat goes on...nnnnow.
_> I came to the realization recently that I haven't really felt like I'm
working towards mastery of any skills and it's making me depressed._
"DevOps engineers" get the short end of the stick on the wide-vs-deep thing
because "DevOps", at present, is constantly changing. And by "changing," I
mostly mean "thrashing," though we have found a decent local maximum with some
of the container stuff that's out there that I think will last, even on the
leading edge, for at least a few years.
Which is to say, this is endemic. Best way out, IMO, is to have other
professional outlets. I write a lot of not-DevOps code, both at my current gig
and on the side, because DevOps requires the Dev side as well as the Ops side
to be worth a fart.
_> and helping people with infrastructure issues (aka being distracted). ...
I rarely get the feeling that I'm in any sort of good work "flow",_
Interruptive time/distractions are a universal problem. They're never going to
go away. What you can and should do (and this requires a manager who Gets It,
so, y'know, YMMV) is set up office hours for those interruptions. "I am
available between X and Y on days A, B, and C to help with your DevOps
problems." Obviously this doesn't apply to stuff like production outages, but
it does apply to stuff like "how should we best accomplish this task?" or "I
need this thing run against the dev environment."
Bonus points, for the last one above, if you can use those requests to build
data to present a business case upward. If you get a lot of requests to do one
or two related things, you can use that data to push your boss to find time
for you to automate it (or to find developer help for it, whatever).
_> rather that I'm just cleaning up and making incremental dents in backlogs
of tech debt._
There are two kinds of DevOps engineering, in my experience: the first one and
the second one. No, that's not right--janitorial and architectural. You (and
I'm cheating a bit, 'cause I know who is really beneath that tremendous hat)
are stuck with a lot of the janitorial stuff because, I think, you tend to
come into established shops that have a Way Of Doing Things and, incidentally,
don't offer a lot of ways to move upward. The architectural roles at companies
like that tend to already be filled (and DevOps, while not hierarchical, does
seem to tend to concentrate a lot of the architectural decision-making to
people like...well...me).
Smaller companies might benefit you, in that you get to take a big swing or
two at the full stack of problems.
_> I'm not sure if I'm not being assertive enough with my needs and ideas_
You're not, but most people aren't. Big personalities (hi) tend to fill the
space. You don't have to be a jerk, but you do have to kinda push to get what
you want to be yours.
Oh, and get out of owning CI/CD. Make developers own that stuff. You'll be
happier. They won't be, until they actually learn what they're doing, but,
y'know. It's good for them. ;)
R: ciaphascain
Thank you for the response Ed, it's crazy that you stumbled upon this post
without any outside influence whatsoever ;) I'm always eager to hear your
perspective on things.
R: tilmonedwards
I have a few scattered thoughts about this, and I'm not sure I have a single
thesis in this post, but I'll type up some of my experiences and hopefully
it'll be helpful in some way. I have also been doing DevOps for 10-100 person
engineering orgs for around 5 years, and I have also felt stagnant, interrupt-
driven, and generally frustrated by perfectly capable engineers waiting on me
to make them a database while I'm busy trying to keep their applications
executing on computers.
I'll apologize in advance for talking about myself a lot in this post, I'm not
sure exactly where you're at career-wise, and I don't like to be prescriptive
about generalities, but hopefully something about my own experience can be
helpful to you or someone else.
Thought 1: Norms become exponentially harder to change the longer they exist.
I did a poor job of setting the right norms when I started my previous job,
and it resulted in a completely unsustainable environment where engineering
was 100% focused on development goals, and I was responsible for keeping the
product afloat. I consider this my mistake, because I was the one explicitly
hired to handle that problem, and the way I handled it was by creating a
single point of failure - me.
I don't consider that job a complete failure, I delivered on the things I was
hired to build. But by the time I realized my mistake, it was already The Way
Things Are Done. I didn't have the skills to change it, or even the words to
explain it yet.
These days I generally try not to ops anything I didn't dev, and I expect the
same from my colleagues from day 1. When someone asks me for a database, I say
"Sure, let me come sit with you and we can walk through creating one." Which
leads me to thought 2:
Thought 2: It's harder to teach someone to fish than it is to give them a
fish.
But the corollary about eating for a lifetime is still true. It's usually
faster for me to spin up a new database, but if I do it, I'm robbing the team
of a chance to learn how their new database works. I am neglecting my
responsibilities as a senior engineer when I don't take the time to teach.
I've been at my current employer for about a year and a half, and it's paid
off - engineers routinely write infrastructure code, build their own
containers and CD pipelines, and deploy new services to production. They use
some of my tools to do it, and I make sure to take the time to help them be
successful with them, but I make sure that they're the ones pushing the
buttons. The payoff is in quotes like this, which one of my colleagues shared
in our engineering channel today (paraphrased):
> This is long overdue but I wanted to give a shout out to @edwards for the
> ops tooling!!! I was able to deploy a new API (staging AND production) in
> minutes which, if I did manually, would have taken days. Thanks for
> iterating and giving us the tools to build things that Just Work. I have
> easily spent days on this in the past, especially things like connecting
> load balancers to services. And, with Cloudformation, we know we can easily
> recreate environments in case something goes wrong. Amazing stuff!!
Thought 3: Soft skills are harder than hard skills
No part of a computer is magic, and at some point you just sorta figure out
how they work, and the mystery and awe of it fades out completely, and the job
you're left with is giving rote instructions to an army of perfectly obedient,
_ruthlessly_ pedantic children. They understand nothing and they don't sleep.
The difference between a junior engineer and a mid-level engineer is the
breadth and/or depth of their technical knowledge, but the difference between
mid-level and senior is how well they communicate that technical knowledge,
and how they collaborate with others. This doesn't necessarily mean
management, but it does mean a big adjustment in the coding/collaboration
ratio. (These days I probably delete as much code as I write...)
One of the biggest changes I've made over the past year or two is in how I
view my own responsibilities. I am not responsible for production, that isn't
sustainable, and it doesn't scale. I am responsible for making sure _the
entire engineering team_ is empowered to handle production. I made that
decision when I started my current job and it's paid off tremendously.
R: ciaphascain
A lot of what you wrote resonates with me, particularly the comments about
soft vs hard skills. I think I need to seek more opportunities in my
organization to teach and collaborate, because those interactions generally
make me feel much better and more valued than whatever I produce code-wise.
Perhaps those interactions are what can cascade down towards promoting self-
sufficiency on the dev teams and good norms (your other points).
I appreciate your thoughtful response, thank you for taking the time to write
it up!
|
HACKER_NEWS
|
On 3/20/07, Karl Fogel <email@example.com> wrote:
> "Mark Phippard" <firstname.lastname@example.org> writes:
> > The problem is that for these files to "work" they pretty much have to
> > live in the root of what the user will check out. That is the only place
> > Eclipse is going to look for them, and since they are needed to configure
> > the project properly you need them right from the beginning.
> Hmmm. Let's get concrete: if you could put them right where you
> wanted them, without regard to anyone else using the tree, what are
> the exact files and where would they live? (If they're all under a
> .settings dir, for example, I doubt anyone would mind...)
I have attached a sample patch that shows the files that would be added.
This is relative to trunk.
Generally speaking, these files should not require much change once they are
set. The Java-specific files might only require a few more tweaks to get
the code formatting settings to our liking. Right now I have the settings
pretty good though. I can ask Eclipse to reformat a JavaHL file according
to these rules and the diff is pretty minimal. So that tells me as we write
new code in Eclipse, it would do a good job helping the user write the code
to our standards.
On the C side, if we really wanted to use Eclipse I think there would be
changes to some of these files as they store configuration about the
tool-chain and I am not set up for any of that. The current C tools have
decent code formatting rules, but they are stored in your global
preferences. So we cannot provide them with the project. We could store an
exported Eclipse preference file in contrib or some place like that for
people to import the settings.
I am not sure if the C files are set up in a way that they can hold different
configurations for different platforms. If they are not, then I imagine
that would mean users on different platforms would have to adjust the
files. I think if that turned out to be the case, we could again store
recommended copies in contrib and the user could fiddle with them.
The files in this patch are for Eclipse 3.3 and CDT 4.0, both of which will
be GA in June. It is pretty common for Eclipse users to be using the
milestone builds at this point and CDT has a number of code formatting
improvements in this release that made it worth using.
To test the patch, apply it to your working copy. It could be trunk or a
branch. You should then have an Eclipse 3.3 install with CDT 4.0 and
Subclipse 1.2.0 also installed. Then just do File -> Import to start the
Import wizard. Under the General category select "Existing Project into
Workspace". Then select your project root. This imports the project into
Eclipse (which just means it points Eclipse at it). It should be all ready
to go, including being connected with Subclipse. Of course, once these
files were in the repository, a new user could just do this all via Checkout
in Subclipse and it would all be set up automatically.
If you have Eclipse 3.2, it will also work but the JUnit configuration will
not be present because this patch is using a new Eclipse 3.3 feature for
that. So the Java classes that use JUnit will have compile errors.
Received on Wed Mar 21 15:49:47 2007
To unsubscribe, e-mail: email@example.com
For additional commands, e-mail: firstname.lastname@example.org
|
OPCFW_CODE
|
Changing the logon script for all users in an organizational unit (OU) is a chore if you're working from the GUI, so try this script instead.
The ability to quickly change the logon script that members of a particular OU are running is quick and easy through VBScript. To change the properties of objects located in a specific OU, you must first bind to that OU using ADSI. To do this, you must list all the parent OUs of the OU you are trying to bind to, as shown in the script in this hack. Then you must gather all the usernames in the OU you are modifying and check to make sure they are indeed just users and not some other object. If they are users, change the path of the logon script property in their account to Network/NewLogon.cmd and set the changes in place. Then notify the person running the script that the changes have been completed.
This script comes in handy when you need to modify common properties of many user accounts in a particular OU all at once in an Active Directory domain. In Windows 2000, unlike in NT4, you cannot just highlight the users you want to change, click on Properties and change a common property (e.g., Logon Script) for the users you have selected.
To use this script, type it into Notepad (with Word Wrap disabled) and save it with a .vbs extension as ModifyUsersOU.vbs:
'~~Comment~~
'Modify all users in a specific OU in Active Directory at once. This script
'will change the logon script path for all users of the
'"Network/Services/Users/Test" OU to "Network/newlogon.cmd".
'~~Script~~
'This is the actual LDAP path. If the OU is a sub-OU, you must enter ALL of them.
Set OU = GetObject("LDAP://DCServerName.MY.Domain.COM/OU=Test,OU=Users,OU=Services,OU=Network,DC=MY,DC=Domain,DC=com")
'Set up to get all the users in the specified OU from above.
'Gather each username.
For Each oUser In OU
    'Make sure they are only the USER class.
    If oUser.Class = "user" Then
        'Set the name of the login script itself here.
        oUser.Put "scriptpath", "Network\newlogon.cmd"
        'Commit the changes.
        oUser.SetInfo
    End If
Next
Wscript.Echo "The Network/Services/Users/Test OU has been updated!"
Wscript.Quit
Change the following line to specify the appropriate OU in your own network environment:
Set OU = GetObject("LDAP://DCServerName.MY.Domain.COM/OU=Test,OU=Users,OU=Services,OU=Network,DC=MY,DC=Domain,DC=com")
For example, if your OU is named Boston and your domain is mtit.com, then this line should be changed to:
Set OU = GetObject("LDAP://DCServerName.MY.Domain.COM/OU=Boston,DC=mtit,DC=com")
Specify the new logon script, like so:
oUser.Put "scriptpath", "Network\newlogon.cmd"
Finally, specify the output for the ECHO by modifying this line as required:
Wscript.echo "The Network/Services/Users/Test OU has been updated!"
In our example, this line should be changed to:
Wscript.echo "The Boston OU has been updated!"
This script can easily be modified to change virtually any other displayed property of the user objects in a particular OU, not just the logon script path.
|
OPCFW_CODE
|
While taking my statistics course at university I had a huge challenge with SPSS. I wouldn't say that I was a slow learner, but I found it hard to grasp all the SPSS concepts. After my undergraduate degree, I decided to specialize in SPSS so that I could help other students facing such a challenge. Over the last 3 years, I have been offering SPSS assignment help through my online platform. I offer a free online tutoring service for all students looking for Statistics assignment help.
Get SPSS assignment help classes
The Statistics homework helper is the site that works. From experience, I can say that they are generally slower. I needed Statistics assignment help that was due in two days, and immediately, they went quiet. I did not know what was happening, but I did call the support team to be informed about the work's progress after the first. They told me that everything is working fine, but after some hours, I had not received any updates. They did send the assignment though with an hour remaining to the deadline, and I would say that the Statistics homework help expert did an incredible job. Still, I am disappointed by the lack of information on the progress of the assignment.
I need a Statistics homework help expert to help me with this. The Pareto distribution is closely related to the exponential distribution. Given that X is a strict Pareto random variable, show that Y = ln(X) is exponentially distributed with mean 1/𝝰. Please provide me a solution to it. If the assignment is done to my satisfaction, I have other upcoming assignments. In one I will need a very experienced Statistics assignment help expert, while the other one is on Excel, which I assume would be simpler.
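For what it's worth, the derivation is short once you fix the convention; assuming "strict Pareto" means the survival function $P(X > x) = x^{-\alpha}$ for $x \ge 1$, then for $Y = \ln X$:

```latex
P(Y > y) = P(\ln X > y) = P\!\left(X > e^{y}\right) = \left(e^{y}\right)^{-\alpha} = e^{-\alpha y}, \qquad y \ge 0 .
```

That is the survival function of an exponential distribution with rate $\alpha$, hence mean $1/\alpha$.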
I am happy to find someone whom I can associate with for Statistics assignment help. I also find it hard to learn SPSS. Yet there are people out there who claim that SPSS is one of the easiest software packages to learn. Maybe I am using the wrong approach. Since you have once been in my shoes, maybe we can discuss this more. You probably know the shortcuts that could help me master SPSS easily, which you have learned over the years working as an SPSS assignment help expert.
From what I have heard, Megastat is an Excel add-in. I want to use it for some analysis. I do not know how to install that add-in into Excel; in fact, I do not know how to install any add-in into Excel. I think I need the services of a Statistics homework help expert to guide me through the process of installing an add-in. The expert should also have some knowledge of SAS because I also have a pending assignment which I am unable to do because of my busy schedule. If I get both Megastat and Statistics assignment help I will be very happy.
Hi there, as a Statistics homework help expert, I have one question related to statistics and Statistics assignment help. Is there a way that someone can find the derivative of an equation? I want a simpler way so that I can be giving answers directly. If there is, please suggest which packages are more appropriate and direct me to a good material where I can get more information. Thanks.
|
OPCFW_CODE
|
Whether you open the database object itself or one of these shortcuts, you are working with the same underlying object; the shortcut is just a pointer to it in the Navigation pane.

To change your category and group settings for the Navigation pane, you need to open the Navigation Options dialog box. To do so, right-click the menu bar at the top of the Navigation pane and click Navigation Options on the shortcut menu. Access opens the Navigation Options dialog box, which lets you create and modify categories and groups. The Categories list under Grouping Options shows all the categories that have been defined in this database, including the Tasks Navigation category that came from the template and the Custom category that Access includes in all databases.

When you select a category in the list on the left, the list on the right shows the groups for that category. Next to each of the groups for the selected category is a check box. When you clear the check box next to any group, Access no longer displays that group in the Navigation pane. As you might recall, when you looked at the Tasks Navigation category in the Navigation pane, you could see only the Tasks, Contacts, and supporting object groups.

The data macro logic looks complicated, but it is designed to work around some data macro limitations. It copies values into the tblLaborHours records from the related tables, prevents duplicate records from being saved to this table, verifies that each record has only one related row marked as the default, and uses a RunDataMacro action to call a saved data macro, passing in a value for each parameter. The On Insert event fires whenever Access saves a new record in the table; in the On Insert data macro you can see the use of the EditRecord and SetField actions to adjust the new row before it is saved.

You can also customize the Quick Access Toolbar. A list of the available commands appears, and you can add any of them to your custom Quick Access Toolbar. After you add all the commands and options that you want, you might find that you do not like the order in which they appear. Access lets you change this order using the Move Up and Move Down arrow buttons at the far right of the dialog box: select a command you want to move in the list on the right, and click the appropriate arrow to move it up or down in the list. To remove a command from your Quick Access Toolbar, select it in the list on the right and click Remove, and Access removes it from your list of commands. If you ever add a command that you did not mean to add, you can click the Cancel button to discard all your changes.
|
OPCFW_CODE
|
The built-in admin has always been a very strong point for me when selling Django as the technology to choose. Some ten years ago, the ratio of user experience to required developer time was just amazing. I feel, though, that times have changed, and I’m facing negative reception of the admin from colleagues on new projects; my impression is that one of the key factors might be the look of the default theme and some UX-related issues.
I know there have been enhancements (e.g. autocomplete_fields), but it feels like a more comprehensive design update is needed to keep the admin from looking too old-school. So I’d like to ask whether other developers are facing such reactions, and whether there is a consensus for modernizing the look and feel of the admin.
I am myself quite backend-oriented, so I can offer no help with this, but as a huge admin fan I feel sorry to work on a project where even the simplest list/detail pages are written from scratch, with maybe half of the functionality of the admin, just because they look better with Material Design.
There are third-party replacement admin themes you can use (see Awesome Django Admin). But I don’t believe anyone is working on these kinds of changes for Django itself.
I’m sad that people would dismiss a very practical out-of-the-box bit of tech because it doesn’t fit their preferred aesthetic though
This is an area I’m interested in. IMO, the admin is one of Django’s killer apps, along with the ORM. I’m also probably too backend focused to do too much here.
On the other hand, the admin is pretty functional and incremental improvements sound fine. One of the biggest UX issues for me is the lack of decent navigation, though I have a PR to make this somewhat better (https://github.com/django/django/pull/12159).
I think the admin could be improved a lot further, but it does seem somewhat neglected relative to how useful it is. I think there are a few reasons for this. The first is that the admin has some pretty old and gnarly code. It’s written with class-based views from before Django’s class-based views existed, so they are kind of “their own thing”. One idea I’ve seen floated is to convert these to use Django’s CBVs proper, and this might be a useful thing to do if you’re backend-focused and have some time.
I think in general, if the code in the admin is simplified, both front and back we’ll see more contributions, more UX improvements, etc.
I wouldn’t be against a modern redesign with more modern JS, etc., but this will depend on the resolution of https://code.djangoproject.com/ticket/31032 - and on how far back we are willing to go on browser support in the admin.
|
OPCFW_CODE
|
Connect the printer end of the USB cable to the USB port on the side of the printer. *The location of the USB port differs depending on your printer. Connect the other end of the USB cable to the USB port on the computer. Turn on the printer by pressing the Power button.
- 1 What cable do I need to connect laptop to printer?
- 2 How do I get my laptop to recognize my printer?
- 3 Can I plug my laptop into a printer?
- 4 How do I get my printer to print with cable?
- 5 How do I connect my HP printer to my laptop with a wire?
- 6 Why is my USB printer not recognized?
- 7 Can I connect my printer to my computer with an Ethernet cable?
What cable do I need to connect laptop to printer?
A USB cable connects your printer to your computer, so you have a direct connection every time you print. The majority of printers are compatible with a USB 2.0 A/B cable. The “A” side of the cable plugs into the USB port on your computer and the “B” side plugs into the back of the printer.
How do I get my laptop to recognize my printer?
To find the printer, go to the Start menu and select Settings, Devices, then Printers & scanners. Now click Add a printer or scanner and after a short while your printer should appear in the list. Select it and hit Add device. Windows should download and install the drivers, if you don’t have them already.
Can I plug my laptop into a printer?
A few laptops still feature a printer port, but most use a regular USB port. Connect a USB cable to the printer and to your laptop. Or, you can plug a standard printer cable into the printer’s I/O panel or into your laptop’s port replicator or docking station. Turn on the printer.
How do I get my printer to print with cable?
Connect one end of a USB cable to the USB port on the rear of the printer and the other end of the USB cable into the USB port on the OTG cable. Plug the micro-USB connector of the OTG cable into the micro-USB port on your Android device. An HP Print Service Plugin window displays on the Android device.
How do I connect my HP printer to my laptop with a wire?
How to connect a printer via wired USB cable
- Step 1: Open windows setting. At the bottom left of your screen, click the Windows icon to reveal your Start Menu.
- Step 2: Access devices. Within the first row of your Windows settings, find and click the icon labeled “Devices”.
- Step 3: Connect your printer.
Why is my USB printer not recognized?
This issue can be caused if any of the following situations exist: The currently loaded USB driver has become unstable or corrupt. Your PC requires an update for issues that may conflict with a USB external hard drive. Windows may be missing other important updates, or there may be hardware or software issues.
Can I connect my printer to my computer with an Ethernet cable?
You cannot connect the printer directly to the computer via Ethernet. It must be connected through a router or network switch. An Ethernet connection is typically faster than USB and allows you direct access to the settings of the printer using the Embedded Web Server.
|
OPCFW_CODE
|
Where the Pomodoros End
At some point in every software developer’s career (and likely so for other knowledge work), they are subjected to a tomato-based hazing ritual known as "the Pomodoro Technique".
The process is always the same:
The Initiate overhears the Senior Engineer Cabal during one of their cloaked rituals—the Morning Standup—as they describe the number of Pomodoros a task will take. During their open office peer one-on-one, the Initiate vocalizes their curiosity of how Italian tomatoes map to man-months, to which the Senior Engineer coyly responds:
- A Pomodoro is a 25-minute span of uninterrupted work.
- Each Pomodoro is followed by a 5-minute, uninterrupted break.
- Use INSERT_TIMER_APP_HERE to keep track of your Pomodoros. All other timer apps suck.
And off the Initiate goes, never to be seen again. Not because they’re working, but because they’re testing alternative Pomodoro tracking apps. Only 173 to go...
Even if this story is only a shoddy attempt at half a polite chuckle, the Pomodoro Technique is at once very real, very misunderstood, adored, and occasionally even effective.
What about those interruptions? Leaving distractions—called “internal interruptions” in the lingo—aside for a moment, what are you supposed to do if others interrupt you?
According to the original Scripture, you are supposed to “inform effectively, negotiate quickly to reschedule the interruption, and call back the person who interrupted you as agreed.”
For such a prescriptive schedule of work, there’s no guidance on when that interruption should be rescheduled for. Furthermore, as a leader I’m interrupted constantly, so this is not a small issue: it’s core to how I need to work.
- Do I tackle all interruptions right now? If so, what is this timer doing?
- Do I reschedule interruptions to take the place of my break? Not much of a break, then, is it? And breaks are why I need something like this!
- Do I reschedule interruptions for after the next break? If so, does that count as a part of my next Pomodoro, or is this some sort of extrajudicial task?
- Do I schedule some sort of “Office Hours” like a professor trying to corral a group of hormonal late teens? Now you’re just being ridiculous. And that’s coming from me, who’s writing all this nonsense.
Where Pomodoros go to die
Interruptions kill Pomodoros. This is why so much of the language, writing, and culture of Pomodoro users talks about “protecting the Pomodoro.”
When your work is about interruption—when your work is about being the interruptible one—then you yourself have taken the Pomodoro Technique to a farm upstate.
I’m still working on the same problem, but the solution I’m using today works better than I initially expected:
Run that timer all day
For a few months now I’ve had a small device running in the corner of my office. It displays the current time, and most importantly a buzzer goes off at :25 and :55 when it’s “time” to take a break, and at :30 and :00 when that “break” would be over.
Do I actually take those breaks? Almost never, but the little buzzer keeps me informed of the break time I’m missing through my various meetings and interruptions, and my subconscious does a remarkable job of nudging me each time the buzzer goes off just how much of a debt I’ve accumulated. Practically speaking I only get to pay up a couple times a day, but now it’s become a conscious, guilt-free choice both to take the interruption and to take the break(s) I’ve neglected.
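The buzzer schedule above is simple enough to sketch in a few lines. This is a hypothetical helper, not the device's actual code: it just classifies a minute of the hour against the four buzz points described.

```python
# A sketch of the all-day buzzer schedule described above: buzz at
# :25 and :55 when a break "should" start, and at :30 and :00 when
# that break would be over. Hypothetical helper, not device firmware.

def buzz_kind(minute):
    """Return what the buzzer signals at a given minute of the hour."""
    if minute in (25, 55):
        return "break starts"
    if minute in (0, 30):
        return "break ends"
    return None  # silent for the rest of the hour

print(buzz_kind(25))  # break starts
```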
When I first started using the Pomodoro Technique circa 2009, my emotional takeaway was the feeling that I’d taken control of my time, no longer controlled by it. My tomato timer may be dead and gone, but this new timer has resurrected that feeling of control and determination, and I look forward to working with it for even longer.
11th of March, 2021
|
OPCFW_CODE
|
THE LARGEST COLLECTION OF IT JOBS ON EARTH
Supports PDF, DOC, DOCX, TXT, XLS, WPD, HTM, HTML files up to 5 MB
ProSource Solutions has an exciting opportunity for a full-time Senior Web Developer with UI development skills using C#, WCF and web services. Our client is growing rapidly as an information technology ..
Are you a talented CNC Programmer looking to challenge yourself and work with some of the most exotic metals in the world? If you answered yes, CRT has the opportunity for you!
The purpose of the Software Developer is to deliver scalable, secure applications while following the software development life cycle that is established by the company. Essential Functions: Design and develop eCommerce/mCommerce ..
The purpose of the Software Associate is to deliver scalable, secure applications while following the software development life cycle that is established by the company. Essential Functions: Design and develop eCommerce/mCommerce ..
Web Developer Co-op
Web Developer Co-op
Matco Tools is a manufacturer and distributor of quality, professional grade automotive equipment, tools and toolboxes. Over 13,000 products are ..
Our direct client located in Cleveland is looking for an Embedded Developer (Device Connectivity)..
Seeking a Sailpoint Identity and Access Management Developer for a Fortune 500 Financial Services Client. The Sailpoint Developer will mainly be responsible for creation of workflows and customization of IDM forms ..
Business Intelligence Analyst
The High Performance Organization (HPO) Group is reinventing how we define, develop, support, and deliver value to our customers throughout the Control and Visualization Business (CVB). ..
CardinalCommerce is looking for someone to work on our team that lives and breathes being able to connect systems together that don't naturally like each other. This person will thrive on ..
Java Developer opening in Akron, OH! This is a full-time/permanent position with a fantastic and growing company headquartered in the area!
You’ll be primarily creating new complex and engaging web applications, ..
Process and Machining Engineer
Eaton’s Aerospace Fuel and Motion Control Systems Division is seeking a Process and Machining Engineer. The Process and Machining Engineer will be based out ..
The Software Developer, as part of the Football Information Systems team, is responsible for the development and maintenance of database driven and stand-alone software applications for all football-related aspects of the ..
Stow Company is seeking a Software Developer. The Software Developer will be responsible for the Design & Development of new software applications, features, and frameworks. Collaborate across teams to develop software ..
Sr. Oracle APEX/EBS Developer
Technical Background/Skills Required: We are seeking a senior-level (7-year) Oracle APEX/EBS Developer - Oracle Implementation & CIP. Demonstrated knowledge of software development practices for the ..
Genesis10 is actively seeking an experienced Software Engineer for a leading Aerospace Defense company in Akron, OH. This position can either be contract or direct placement.
We have a real unique opportunity for a Web Developer that wants to work in a creative, fun, and challenging environment. Our client is a retailer with a strong on-line presence ..
Our client is an award winning, boutique creative agency who is looking to add more talent to their technology team. They are currently seeking a mid-level C# .NET Developer
As a ..
Exodus Integrity Services, Inc, is a rapidly expanding technology company headquartered in Northeast Ohio. EIS provides quality services to our clients by instilling honesty, commitment, and hard work to find the ..
Cloud User Assistance Developer
Develops courseware to support new and existing products, often though not exclusively in conjunction with Product Division.
Coordinate and evaluate development contributions from other ..
Software Developer / Healthcare - work remote
Candidates will expected to build responsive web applications to user specifications that uses a Data API to manage health information. Client uses Visual Studio ..
Software Developer / Programmer
Responsible for writing programs to maintain and control computer systems software for operating systems, networked systems, and database systems. Responsible for creating the ..
Our client is a successful and rapidly growing SaaS company in Cleveland, OH. One of their key growth areas is design and development of new user interfaces for their product ..
Genesis10 is currently in need of an experienced Programmer Analyst for a leading financial institution in Cleveland, OH. This is a W2 contract only position.
The position provides on-call application ..
|
OPCFW_CODE
|
See full list on howtogeek.com
Ubuntu's encrypted home directory feature uses this technology. USN-2876-1: eCryptfs vulnerability. 20 January 2016. mount.ecryptfs_private could be used to run programs as an administrator.
The encryption has some drawbacks – there’s a performance penalty and recovering your files is more difficult. If you change your mind later, you can remove the encryption without reinstalling Ubuntu. Apr 07, 2009 · The Ubuntu Jaunty Jackalope (9.04) release will enable per-user home directory encryption, automatically mounting it on login, and un-mounting it on the last logout of the user. Rationale The EncryptedPrivateDirectory work proved the usefulness and stability of the Linux kernel's ecryptfs cryptographic filesystem.
Mar 30, 2017 If you're looking for an easy way to encrypt directories in Linux, here's an introduction to eCryptfs. Mount an encrypted directory, add data, and
Aug 25, 2015 During Positive Hack Days V, I made a fast track presentation about eCryptfs and password cracking. The idea came to me after using one
In general, a standard system update will make all the necessary changes. eCryptfs is widely used: as the basis for Ubuntu’s Encrypted Home Directory, natively within Google’s ChromeOS, and transparently embedded in several network attached storage (NAS) devices. In this tutorial, let us learn how to encrypt a directory and partition with eCryptfs on Debian and Ubuntu systems. Installations with ecryptfs can be compromised more easily. First, the hash of the user password, which is used as the passphrase, is stored in the unencrypted part of the disk. Second, it is possible to install a key logger to get the passphrase more easily.
Apr 06, 2020 · eCryptfs is deprecated and should not be used; instead, the LUKS setup as defined by the Ubuntu installer is recommended.
You have searched for packages that names contain ecryptfs-utils in all suites, all sections, and all architectures. Found 2 matching packages. Exact hits Package ecryptfs-utils. xenial (16.04LTS) (misc): ecryptfs cryptographic filesystem (utilities) 111-0ubuntu1.1 [security]: amd64 i386 ecryptfs-utils - 83-0ubuntu3.2.10.04.1 In general, a standard system update will make all the necessary changes. Jan 20, 2016 · Ubuntu 14.04. ecryptfs-utils - 104-0ubuntu1.14.04.4.
This has been fixed in Ubuntu 9.10. Ubuntu puts the encrypted home directory files in a different directory, which is then decrypted and mounted on the fly to the user’s home directory by ecryptfs. All of the encrypted files for our example are located here: May 13, 2010 · Recently my old desktop system crashed and I bought a new Ubuntu laptop from Dell. To access my data from the old hdd, I’ve attached my desktop hard disk using an external USB case.
eCryptfs is a POSIX-compliant enterprise-class stacked cryptographic filesystem. Apr 27, 2019 I recently found myself needing to restore data from a backup of an ecryptfs-encrypted Ubuntu home partition. This didn't go as smoothly as Mar 26, 2020 Then enter into the Terminal: sudo ecryptfs-unwrap-passphrase ./wrapped- Useful link: https://pfertyk.me/2017/05/recovering-e in-ubuntu/. This script sets up an ecryptfs mount in a user's ~/Private.
Install eCryptfs on Linux
eCryptfs stores cryptographic metadata in the header of each file written, so that encrypted files can be copied between hosts; the file will be decryptable with the proper key, and there is no need to keep track of any additional information aside from what is already in the encrypted file itself. Ubuntu 18.04 LTS and newer Ubuntu versions no longer include an option in the installer to encrypt the home directory.
On Ubuntu 20.04 - and I have encountered this with (vanilla) GNOME before - with KDE Plasma (no, not Kubuntu!), I am faced with a strange thing that happens every few hours or so and for which I have no explanation or remedy as of yet. Somehow the ecryptfs-encrypted home folder which gets mounted when I log on "disappears" out of the blue.
Jun 05, 2020 · eCryptfs is derived from Erez Zadok’s Cryptfs and the FiST framework for stacked filesystems. It was originally authored by Michael Halcrow and the IBM Linux Technology Center, and is now maintained by Dustin Kirkland and Tyler Hicks of Canonical, the parent company of Ubuntu.
|
OPCFW_CODE
|
Christian Wüthrich – Associate Professor, University of Geneva. Christian’s philosophical interests most prominently include foundational issues in physics, particularly in classical general relativity and quantum gravity. Of course, he also gets excited about the implications of philosophy of physics for general philosophy of science and metaphysics. More specifically, he enjoys thinking about issues such as space and time, persistence, laws of nature, determinism, and causation.
Karen Crowther is a postdoc working on the Swiss NSF-funded project “New Avenues Beyond Space and Time”, together with Niels Linnemann and Christian Wüthrich (PI). Before this, she was a postdoc at the University of Pittsburgh, and before that she received her PhD from the University of Sydney. Karen’s research has focused on inter-theory relations in physics—in particular, the nature of the relationships between quantum gravity and general relativity. Currently, Karen is interested in exploring the role of scientific principles in theory-change, and the principles of quantum gravity.
Vincent Lam is a scientific collaborator at the University of Geneva and an honorary research fellow at the University of Queensland. His research interests include the philosophy and foundations of physics, and the epistemology and metaphysics of science.
Baptiste Le Bihan (postdoc) received his PhD in philosophy from the University of Rennes 1 under the supervision of Pierre Joray and the co-supervision of Jiri Benovsky (University of Fribourg). He works mainly in philosophy of science and metaphysics. In his dissertation, he studied whether the future can be open when we operate under the eternalist assumption that the future exists. His research focuses mainly on time, material objects and mereology. He is now working on the possible disappearance of time in quantum gravity in the project Space and Time after Quantum Gravity.
Augustin Baas, PhD in Physics and Master in Philosophy. His thesis project in philosophy “The randomness assumption” is supervised by C. Wüthrich (University of Geneva) and A. Barberousse (Paris-Sorbonne).
Niels Linnemann is a PhD student working on the Swiss NSF-funded project “New Avenues Beyond Space and Time”, together with Karen Crowther and Christian Wuthrich (PI). Before his thesis project on “Emergent Gravity and Its Implications for a Theory of Quantum Gravity”, he studied Maths, Physics and Philosophy in Münster, Lund, Oxford and Cambridge. His major interests currently lie in philosophy of spacetime, philosophy and foundations of quantum gravity and their synthesis, i.e. the philosophy of quantum gravity.
Rasmus Jaksland is a PhD fellow in philosophy at the Norwegian University of Science and Technology. He works on a project titled “The Prospects of Naturalized Metaphysics”, where he investigates the role of physics in metaphysics and attempts to assess to what degree metaphysics should be informed by and founded in physics. He is also engaged in research on foundational issues in high energy physics, particularly on the nature of spacetime in the light of the AdS/CFT correspondence and on the metaphysical implications of the holographic relation between entanglement and spacetime.
Radin Dardashti (former doctoral fellow of the Geneva Symmetry group) is an assistant professor at the Leibniz University Hannover. Radin studied physics in Aachen and Queen Mary University of London and philosophy of science at the London School of Economics. He obtained his Ph.D. from the Ludwig-Maximilians-Universität München with a dissertation on novel scientific methodologies in modern fundamental physics. He works mainly in philosophy of science (scientific methodology, theory development and assessment) and philosophy of physics (role of symmetries, no-go theorems, analogue gravity).
|
OPCFW_CODE
|
How do I loop through a multi-column listbox in WPF?
I am a beginner with WPF. I fear this may sound like a silly question.
I have a listbox with two columns. Each listbox item contains a horizontal stacked panel which in turn contains textblocks.
The listbox is empty, with each listbox item being added by the end-user through a couple of textboxes placed elsewhere. The first column accepts strings, and the second column accepts only percentages.
(I have attached a relevant portion of the event sub where a user is adding new rows.)
Private Sub btnAddItem_Click(sender As System.Object, e As System.Windows.RoutedEventArgs) Handles btnAddSplit.Click
'...
Dim ListBoxItemName As New TextBlock
ListBoxItemName.Text = Name.Text
ListBoxItemName.Width = 170
Dim ListBoxItemValue As New TextBlock
ListBoxItemValue.Text = SplitValue.Text
ListBoxItemValue.Width = 70
Dim ListBoxStackPanel As New StackPanel
ListBoxStackPanel.Orientation = Orientation.Horizontal
ListBoxStackPanel.Children.Add(ListBoxItemName)
ListBoxStackPanel.Children.Add(ListBoxItemValue)
Dim NewEntry As New ListBoxItem
NewEntry.Content = ListBoxStackPanel
MyListBox.Items.Add(NewEntry)
'...
End Sub
I would like to be able to check every time the above event is fired that the column of percentages does not exceed 100%. I have a couple of labels below the listbox itself where I would like to show the running total and the remainder.
My questions are:
1) How can I loop through the second column of percentages to show the running total and remainder?
2) Is there something more suitable than a ListBox that could make this easier?
I would appreciate any sort of guidance towards a solution on this, regardless of whether it is in VB or C#. Thank you very much.
Based on your code sample, it appears as if you're going through the same issues I went through when I first started with WPF. It's difficult to let go of that codebehind mentality and to embrace databinding, but I think you should give it a try. You will probably find that it's worth it.
Ultimately, I think your best bet would be to bind some collection of objects to the listbox and then iterate through this collection, like...
<ListBox ItemsSource="{Binding StringPercentageCollection}" ... />
And bind your columns, or your textblocks, or whatever your layout is, to the public properties of "YourStringPercentageObject".
And then "StringPercentageCollection" might be an ObservableCollection (or some other collection) of YourStringPercentageObject in the data context. Then you'd be able to iterate through the observable collection normally.
Thanks for the tips, I will look into this approach instead. Databinding, here I come!
I would keep track of the percentage in another field if possible. This would separate your implementation from your user interface which is generally a good idea.
|
STACK_EXCHANGE
|
document.addEventListener("DOMContentLoaded", function() {
// annyang is falsy when the browser lacks Web Speech API support.
if (!annyang) {
return alert("Lo siento, tu navegador no soporta el reconocimiento de voz :(");
}
const $comandosReconocidos = document.querySelector("#comandosReconocidos"),
$vozDetectada = document.querySelector("#vozDetectada");
const loguearComandoReconocido = contenido => {
$comandosReconocidos.innerHTML += contenido + "<br>";
};
const loguearVozDetectada = contenido => {
$vozDetectada.innerHTML += contenido + "<br>";
};
// Recognize Mexican Spanish.
annyang.setLanguage("es-MX");
// Command patterns: a "*name" splat captures a free-form phrase
// and passes it to the handler as an argument.
let comandos = {
"hola": () => {
loguearComandoReconocido(`Hola mundo!`);
},
"reporte de ventas de *mes": mes => {
if ("enero,febrero,marzo,abril,mayo,junio,julio,agosto,septiembre,octubre,noviembre,diciembre".split(",").indexOf(mes.toLowerCase()) === -1) {
return;
}
loguearComandoReconocido(`Ok te muestro el reporte de ventas de ${mes}`);
},
"enviar correo a *usuario": usuario => {
let usuarioCorregido = usuario.replace(/\ /g, "").replace(/arroba/g, "@").toLowerCase();
loguearComandoReconocido(`Originalmente es ${usuario} pero creo que el correcto es ${usuarioCorregido}`);
},
"mi nombre es *nombre y tengo *anios años": (nombre, anios) => {
loguearComandoReconocido(`Hola ${nombre} es genial que tu edad sea ${anios} :)`);
}
};
annyang.addCommands(comandos);
// Log every phrase the recognizer hears, even when no command matches.
annyang.addCallback("result", frases => {
loguearVozDetectada(`<strong>Probablemente has dicho: </strong> <br> ${frases.join("<br>")}`);
});
// Start listening (the browser will ask for microphone permission).
annyang.start();
});
|
STACK_EDU
|
Does creme fraiche contain live bacteria in the same way that yoghurt does?
Is crème fraiche 'live', like yoghurt is? Or is it 'inert?'
Unfortunately, I can only give a similar answer to ones you've received in your question about sour cream -- generally, you'll have to contact the manufacturer.
That said, most commercial cultured dairy products do not undergo a second "pasteurization" step after fermentation, and in a few searches on the topic, I wasn't able to find anything about crème fraîche manufacture that would indicate that pasteurization after culturing is common. Even yogurt that is "heat treated" after fermentation is often not subjected to the same levels of heat and time that would be required for normal pasteurization -- thus, while some (and maybe most) bacteria may be killed off, it's likely that some remain.
I would generally assume that any cultured dairy product (crème fraîche, yogurt, cultured buttermilk, kefir, sour cream, etc.) will contain at least some live bacteria unless the label or manufacturer explicitly tells you otherwise. The question is the concentration. Usually high-fat cultured dairy products (crème fraîche, sour cream) contain a lower concentration of culturing bacteria than lower fat dairy (yogurt, buttermilk), but it's tough to know how low.
Regardless, "full-fat" crème fraîche should generally be relatively stable even if you heat it. So, you could theoretically pasteurize it yourself by heating above 150F (66C) for at least 30 minutes. At higher temperatures, pasteurization will take less time (the standard for commercial pasteurization of high-fat milk products is usually 15 seconds at 166F (74C)), though at some point you'll begin significantly altering the texture of the product at high temperature.
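As a quick sanity check on the numbers above, the two quoted set-points convert from Fahrenheit to the Celsius values given in parentheses. This is plain arithmetic, nothing specific to dairy processing:

```python
# Convert the two pasteurization set-points quoted above from
# Fahrenheit to Celsius to confirm the parenthesized values.

def f_to_c(f):
    return (f - 32) * 5 / 9

# Home low-temperature hold: 150F for at least 30 minutes
print(round(f_to_c(150)))  # 66
# Commercial HTST standard for high-fat milk products: 166F for 15 seconds
print(round(f_to_c(166)))  # 74
```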
Keep in mind, though, that even "pasteurized" products usually contain small amounts of active lactic-acid bacteria. Pasteurization temperatures and time are designed around killing specific nasty bacteria which are known to cause illness. The kind of bacteria used for culturing dairy products are not generally considered harmful, and many of them can survive longer at high temperatures (hence, the reason why pasteurized milk eventually often "goes sour"). If you're looking for dairy products that are actually sterile (with no live bacteria), you'll need to look at "ultra-high temperature" (UHT) pasteurized products. These are the sort of dairy products that can often be stored on the shelf without refrigeration for months without going bad. Normal "pasteurized" dairy products still should be expected to contain some live bacteria (just a lower concentration), which is why they are refrigerated.
|
STACK_EXCHANGE
|
Embedded Linux on Zynq 7000, dropping almost all UDP packets
I am working with the Xilinx distribution of Linux on a Zynq 7000 board. This has two ARM processors, some L2 cache, a DRAM interface, and a large amount of FPGA fabric. Our appliance collects data being processed by the FPGA and then sends it over the gigabit network to other systems.
One of the services we need to support on this appliance is SNMP, which relies on UDP datagrams, and although SNMP does have TCP support, we cannot force the clients to use that.
What I am finding is that this system is losing almost all SNMP requests.
It is important to note that neither the network nor the CPUs are being overloaded. The data rate isn't particularly high, and the CPUs are usually somewhere around 30% load. Plus, we're using SNMP++ and Agent++ libraries for SNMP, so we have control over those, so it's not a problem with a system daemon breaking. However, if we do stop the processing and network activity, SNMP requests are not lost. SNMP is being handled in its own thread, and we've made sure to keep requests rare and spread-out so that there really should be no more than one request buffered at any one time. With the low CPU load, there should be no problem context-switching to the receiving process to service the request.
Since it's not a CPU or ethernet bandwidth problem, my best guess is that the problem lies in the Linux kernel. Despite the low network load, I'm guessing that there are limited network stack buffers being overfilled, and this is why it's dropping UDP datagrams.
When googling this, I find examples of how to use netstat to report lost packets, but that doesn't seem to work on this system, because there is no "-s" option. How can I monitor these packet drops? How can I diagnose the cause? How can I tune kernel parameters to minimize this loss?
Thanks!
It's possible that the SNMP responses are being lost on the requesting end, but that's just a plain old x86 machine running Linux with lots of CPU and memory.
It'd be great to bisect the problem and determine where the packets get to before being lost. You can use a tool like Wireshark to make sure the requests are reaching the Zynq board, and then tcpdump on the board itself to see whether the UDP packets make it into the kernel. You can install other Linux utilities through the board's flash. Also, UDP doesn't have guaranteed delivery (I'm not sure whether SNMP has its own retry logic on top).
Ok, I've personally never used wireshark, but my colleagues have, so I'll work with them on that. As for SNMP, what it has is a timeout. In my case, I'm trying to change a state variable and finding it not getting set, so I spawn a thread that retries 10 times. Frequently all 10 attempts are lost, which is distressing.
Wireshark or tcpdump is a good approach.
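Since the netstat on the board has no "-s" option, the same UDP counters can be read directly from /proc/net/snmp, which is where netstat gets them anyway. A minimal sketch (the field names come from the kernel; a steadily growing RcvbufErrors is the classic sign of an overfilled socket receive buffer):

```python
import os

def parse_udp_counters(snmp_text):
    """Return a dict of UDP counters from the contents of /proc/net/snmp."""
    # The "Udp:" prefix matches exactly two lines: a header line with field
    # names and a value line. ("UdpLite:" lines do not match this prefix.)
    lines = [l for l in snmp_text.splitlines() if l.startswith("Udp:")]
    header = lines[0].split()[1:]
    values = [int(v) for v in lines[1].split()[1:]]
    return dict(zip(header, values))

# On the target, read the live counters (the path is standard on Linux):
if os.path.exists("/proc/net/snmp"):
    with open("/proc/net/snmp") as f:
        counters = parse_udp_counters(f.read())
    for name in ("InDatagrams", "InErrors", "RcvbufErrors"):
        print(name, counters.get(name))
```

Polling this before and after a burst of lost requests should show whether the drops are happening inside the kernel's UDP layer.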
You may want to take a look at the settings in /proc/sys/net/ipv4/ or try an older kernel (3.x instead of 4.x). We had an issue with tcp connections on the Zynq with the 4.4 kernel but this could be seen in the system logs (A warning regarding SYN cookies and possible flooding).
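Alongside the /proc/sys/net tuning, the receive buffer can also be enlarged from the application side. A sketch of that half of the tuning (the 4 MiB size is an arbitrary example; the kernel caps the request at net.core.rmem_max, so that sysctl may need raising first):

```python
import socket

# Ask the kernel for a larger UDP receive buffer on this socket. If the
# request exceeds net.core.rmem_max, the kernel silently clamps it, so
# check the effective value afterwards.
REQUESTED = 4 * 1024 * 1024  # 4 MiB; size to cover the worst-case burst

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)
# On Linux, getsockopt reports roughly double the granted size, because
# the kernel accounts for its own bookkeeping overhead in the same number.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("effective receive buffer:", effective, "bytes")
sock.close()
```

If the effective value comes back far below the request, raising net.core.rmem_max (e.g. via sysctl) is the usual next step.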
|
STACK_EXCHANGE
|
Method Error
I am new to Julia and am trying to use this code, but I keep getting the following error when I try to run RECFAST in all of the example code files. Please advise.
`MethodError: no method matching Bolt.RECFAST(::Background{ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1}, Interpolations.ScaledInterpolation{ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1}, 1, Interpolations.BSplineInterpolation{ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1}, 1, OffsetArrays.OffsetVector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1}, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1}}}, Interpolations.BSpline{Interpolations.Cubic{Interpolations.Line{Interpolations.OnGrid}}}, Tuple{Base.OneTo{Int64}}}, Interpolations.BSpline{Interpolations.Cubic{Interpolations.Line{Interpolations.OnGrid}}}, Tuple{StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float64, ::Float6...
@ Bolt ~/.julia/packages/Parameters/MK0O4/src/Parameters.jl:545
[2] pkc(Ω_c::ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1}, k_grid::Vector{Float64})
@ Main ./In[7]:4
[3] fc(Ω_c::ForwardDiff.Dual{ForwardDiff.Tag{typeof(fc), Float64}, Float64, 1})
@ Main ./In[8]:7
[4] derivative(f::typeof(fc), x::Float64)
@ ForwardDiff ~/.julia/packages/ForwardDiff/vXysl/src/derivative.jl:14
[5] top-level scope
@ In[12]:1`
Can you post the code that produced this error?
When I run the basic_usage.jl example things look fine for me? Are you using the latest version of main?
|
GITHUB_ARCHIVE
|
June 30 - I released a new version of Tux Paint, and also discovered the earlier releases got a good review at this Spanish website about Linux!
June 25 - I just returned from a week-long road trip to visit Melissa's parents in Seattle (about 1800 miles there and back). In that time, I had no access to e-mail. I've collected over 1200 new messages, over 400 of which were automagically tagged as spam. YUCK!
June 19 - I've been noticing something I'd have never expected. My old, vector-driven, X-Window-based 3D Pong game which I wrote over 4 years ago, is one of the most popular things on my software site... I suppose it's time for a new version!
June 18 - Tux Paint is now available for Windows. (I haven't tested it yet, but hopefully it should work...)
June 18 - I posed a question to Slashdot about art in GPL software. They accepted. Please join the conversation!
June 16 - Last night at about 3:30am I released an alpha version of a program I had been working on the previous two afternoons: Tux Paint, a simple and fun drawing program for little kids. Happy Father's Day!
June 11 - Virtual me got reviewed at GamesForLinux.de.
June 5 - At the request of members of LUGOD, I've donated my collection of Microsoft news to their site. I spruced it up quite a bit, too. Check it out!
June 4 - Feh - Just because I made a virtual version of myself, I'm somehow now an evil genius extraordinaire. I should make a virtual Wil Wheaton. Now that would be evil... err.. I mean genius!
June 3 - When someone suggests something as silly as porting myself to a PDA, is it that bizarre that I actually do it???
June 2 - You can now get a New Breed Software Golf Shirt!
June 2 - I've also updated my resume, since it was getting crufty.
June 2 - Finally, two years later, I change my photo on my website. Ironically, the photo is from over two years ago. (Artsy/techno. photo I took of myself in front of a TV. I GIMPed it this evening to look like it's on a TV, and threw it up here.)
June 1 - Looks to be a busy month for me. Visiting Washington state in mid-June. Preparing for, and doing, a demonstration of the Zaurus PDA for the Davis PC Users Group, and preparing for a demonstration of The GIMP graphics tool at July's Web_SIG group in Sacramento.
|
OPCFW_CODE
|
11 Must Know Tips for excelling in Python Programming
Python Coding Essential Tips
Understanding the nuances of any coding language is not easy. It comes with lots and lots of experience. Here are 11 must-know Python coding tips, whether you are starting out new or preparing for your upcoming Python interviews.
It is extremely important to know the Python versions. There has been a major debate over which version to use: some say Python 2 is better, while others say Python 3 is. The final Python 2 release line, Python 2.7, came out in 2010, while Python 3.0 was released in 2008; the latest Python 3 release at the time of writing, Python 3.6, came out in 2016. Packages like six make it easy to write code that runs on both Python 2 and 3 with very few changes. Since all new standard-library improvements now land only in Python 3, it is better to use Python 3 if it does everything you need. Python 3 is the more consistent language, but it has downsides too: third-party module support is more limited, and some major frameworks still run on Python 2.
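One practical consequence of the version split is that the same code can silently behave differently on the two interpreters. A small guard like this (an illustrative sketch, not from any particular library) fails fast instead:

```python
import sys

# Fail fast on the wrong interpreter instead of dying later with a
# confusing SyntaxError or a silent behavior change.
if sys.version_info < (3,):
    raise RuntimeError("This script requires Python 3, found %s" % sys.version)

# Integer division is one of the classic 2-vs-3 differences:
print(7 / 2)   # 3.5 on Python 3 (true division)
print(7 // 2)  # 3 (floor division on both versions)
```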
Following are the most used python libraries that you must know of before starting coding in python:
Pandas: pandas is an open-source Python library that provides easy-to-use data structures and data-analysis tools. It also provides tools for loading data from different file formats and for merging and joining data. Pandas also lets you aggregate and transform data using group-by operations.
pip install pandas
import pandas as pd
Numpy: NumPy, or numeric Python, is used for scientific computation in Python. It provides tools for working with multidimensional arrays, including powerful ways of indexing into them.
import numpy as np
BeautifulSoup: BeautifulSoup is a python library that is helpful in scraping data from HTML and XML files. It automatically converts incoming documents to Unicode and outgoing documents to UTF-8.
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, "html.parser")
for val in soup.find_all("a"):
    print(val.get("href"))
matplotlib: matplotlib is a Python library that is used to plot interactive graphs. Data visualization is necessary to identify useful patterns in data.
import matplotlib.pyplot as plt
For example –
a = [1, 2, 3]
b = list(map(lambda x: x*5, a))  # b is [5, 10, 15]
For example –
a = ["using", "enumerate", "in", "python"]
for counter, val in enumerate(a):
    print(counter, val)
For example –
file_location = input("Enter the file location: ")
a = int(input("Enter the value of a: "))
my_list = [1, 1, 1, 1, 2, 2, 2, 3, 3, 4]
for val in sorted(set(my_list)):
    print(val, ":", my_list.count(val))
This prints each distinct value with its number of occurrences:
1 : 4
2 : 3
3 : 2
4 : 1
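The counting loop above can also be written with the standard library's collections.Counter, which walks the list once instead of once per distinct value (the sample list here is chosen to match the printed counts):

```python
from collections import Counter

# Count occurrences in a single pass rather than calling list.count()
# once per distinct value inside a loop.
my_list = [1, 1, 1, 1, 2, 2, 2, 3, 3, 4]
counts = Counter(my_list)
for val, n in sorted(counts.items()):
    print(val, ":", n)
# 1 : 4
# 2 : 3
# 3 : 2
# 4 : 1
```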
my_list = [[1, 2], [3, 4], [5, 6]]
new_list = []
for val in my_list:
    for i in val:
        new_list.append(i)  # this is how you flatten a list of lists
print(new_list)  # [1, 2, 3, 4, 5, 6]
my_list[start:end] returns a slice of the list, where:
start is the index where the slice begins
end is the index where the slice stops (the element at end is not included)
8. Sorting - sort() is a built-in list method in Python that arranges elements in an ordered manner. Python also allows you to sort the list according to any key you choose.
my_list = [3, 6, 8, 2, 78, 1, 23, 45, 9]
my_list.sort()
# [1, 2, 3, 6, 8, 9, 23, 45, 78]
my_list.sort(key=lambda x: x, reverse=True)  # sorts in descending order
Python programming also allows you to apply arithmetic element-wise across two lists using zip:
x = [1, 2, 3]
y = [4, 5, 6]
res = [i*j for i, j in zip(x, y)]  # [4, 10, 18]
Hope these tips will be helpful to you. Do you have any similar tips that you want to share? Please post and let us know.
|
OPCFW_CODE
|
startJVM throws EXCEPTION_ACCESS_VIOLATION
Hello,
I am currently investigating an error on windows that came up for some of our users https://github.com/bayer-science-for-a-better-life/paquo/issues/67 And I was wondering if this a known issue for jpype.
Basically startJVM throws the following error and crashes the python interpreter:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0000000000000000, pid=9324, tid=14620
#
# JRE version: (16.0.2+7) (build )
# Java VM: OpenJDK 64-Bit Server VM (16.0.2+7, mixed mode, tiered, compressed oops, compressed class ptrs, g1 gc, windows-amd64)
# Problematic frame:
# C 0x0000000000000000
#
# No core dump will be written. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Users\mail\Development\paquo\hs_err_pid9324.log
Here's a minimal script to reproduce the error:
(most of it is just to download the tool which we wrap using jpype together with its jvm)
import os.path
import platform
assert platform.system() == "Windows", "this code is meant for windows"
def download_qupath():
from urllib.request import urlopen
from urllib.parse import urlsplit
from shutil import unpack_archive
qp_url = "https://github.com/qupath/qupath/releases/download/v0.3.1/QuPath-0.3.1-Windows.zip"
chunk_size = 10 * 1024 * 1024
fn = os.path.basename(urlsplit(qp_url).path)
with open(fn, mode="wb+") as tmp:
with urlopen(qp_url) as f:
print("downloading", qp_url)
for chunk in iter(lambda: f.read(chunk_size), b""):
print(".", end="", flush=True)
tmp.write(chunk)
print("OK")
print("unpacking")
unpack_archive(fn)
QP_DIR = "QuPath-0.3.1"
if not os.path.isdir(QP_DIR):
download_qupath()
print("setup done")
# Code Producing the Error
# ========================
import jpype
jvm_path = os.path.join(os.path.abspath(QP_DIR), "runtime", "bin", "server", "jvm.dll")
try:
jpype.startJVM(jvm_path) # crashes with EXCEPTION_ACCESS_VIOLATION
finally:
print("... never reached ...")
I have tested this locally with python3.9:
(vevn39) C:\Users\mail\Development\paquo>python --version --version
Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)]
But users have reported the same issue on windows with python3.8
Any insight would be very much appreciated.
Cheers,
Andreas 😃
This sounds very familiar to me, from this issue in my repo: MPh-py/MPh#49.
The jvm.dll depends on a number of external DLLs. If those aren't unique on the system, it may end up loading the wrong one. In your case, it seems to outright crash. I suspect the problem occurs when it tries to load either vcruntime140.dll or vcruntime140_1.dll, which ship with that vendored Java run-time of yours, but are commonly found elsewhere on a Windows system as well.
It helps to put the bin folder that they're in first on the search path:
import jpype
import os
QP_DIR = "QuPath-0.3.1"
bin = os.path.join(os.path.abspath(QP_DIR), "runtime", "bin")
os.environ["PATH"] = bin + os.pathsep + os.environ["PATH"]
jvm = os.path.join(bin, "server", "jvm.dll")
jpype.startJVM(jvm)
Thank you so much @john-hen
Interestingly your suggested fix solves the issue for my python3.9 installation from python.org,
but it does not work for my python3.9 installation from the Microsoft Store. I'll try to investigate a bit further.
It's definitely a bit confusing:
microsoft store python3.9
C:\Users\mail\Development\paquo>python --version --version
Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)]
C:\Users\mail\Development\paquo>python -c "import sys; print(sys.executable)"
C:\Users\mail\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe
python.org python3.9
C:\Users\mail\Development\paquo>C:\Python39\python --version --version
Python 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)]
C:\Users\mail\Development\paquo>C:\Python39\python -c "import sys; print(sys.executable)"
C:\Python39\python.exe
Yeah, that is confusing. I never use the Microsoft Store installation, only the python.org one. And I'd guess that's true for most of my Windows users as well.
But as Python 3.9 ships with vcruntime140.dll and vcruntime140_1.dll, you may want to check if those binaries are somehow different between the two distributions. There's a chance that if Python loads those libraries, then the Java VM, running in the same process, will just keep using whichever one was already loaded, so the search-path manipulation will have no effect (as far as these two DLLs are concerned.) And maybe (pure speculation) one version is incompatible with the Java RE build, the other one isn't.
As you can see from the issue I linked, not all Java RE builds depend on these specific DLLs. So if it's up to you to vendor in a different one, without that dependency, that might solve your problem. (I'm really no expert on this, so, again, pure speculation.)
So to summarize my findings:
@john-hen's suggested fix worked for all python.org Python installations on Windows 10.
It did not work for MicrosoftStore installed Pythons.
I couldn't work out the specific difference between the Microsoft Store Python versions (I tested py37, py38 and py39) and the python.org Pythons without investing more time, so I opted for throwing a warning in case a user is running a MicrosoftStore Python. Here's the warning code:
https://github.com/bayer-science-for-a-better-life/paquo/blob/d8314c86da16d657f0f457cb93f902fa7d2dbdcd/paquo/jpype_backend.py#L187-L198
Moreover, I tried adding the runtime/bin as a jvm option via -Djava.library.path=... but that did not work.
To minimize the changes to the runtime environment, I am currently using a context manager around jpype.startJVM to only change the "PATH" environment variable for starting the jvm, and that works for my usecase.
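That context-manager approach can be sketched like this (the helper name and jvm_bin_dir are illustrative placeholders, not the actual paquo code):

```python
import os
from contextlib import contextmanager

@contextmanager
def prepend_path(directory):
    """Temporarily put `directory` first on PATH, restoring it afterwards."""
    old_path = os.environ.get("PATH", "")
    os.environ["PATH"] = directory + os.pathsep + old_path
    try:
        yield
    finally:
        os.environ["PATH"] = old_path

# Usage (jpype import omitted so the sketch stays self-contained):
# with prepend_path(jvm_bin_dir):
#     jpype.startJVM(os.path.join(jvm_bin_dir, "server", "jvm.dll"))
```

This keeps the PATH change scoped to the JVM startup, so the rest of the process sees the original environment.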
Thanks again for the help 😃
Cheers,
Andreas 😃
|
GITHUB_ARCHIVE
|
DeployableByOLM" err="" result=ERROR for open-access ibm-mq-operator 1.3.1
Bug Description
There is an unclear error produced by DeployableByOLM on an open-access IBM operator:
time="2022-11-28T21:14:20Z" level=debug msg="running check: DeployableByOLM"
time="2022-11-28T21:14:23Z" level=info msg="check completed: DeployableByOLM" err="<nil>" result=ERROR
Could we have a bit more logs in that situation?
Version and Command Invocation
preflight version: the same error on pr843 and 1.4.2
podman run --rm --privileged --security-opt=label=disable
-e PFLT_JUNIT=true
-e PFLT_ARTIFACTS=/artifacts
-e PFLT_LOGFILE=/artifacts/preflight.log
-e PFLT_LOGLEVEL=trace
-e DOCKER_CONFIG=/opt
-e PFLT_DOCKERCONFIG=/opt/config.json
-e KUBECONFIG=/kubeconfig
-e PFLT_INDEXIMAGE=registry.dfwt5g.lab:4443/telcoci/preflight/disconnected-catalog@sha256:794292f20a050c6f628aafc0959de871c8527d23d2abaf9443fe3f0ce2b9765e -e PFLT_NAMESPACE=preflight-testing
-e PFLT_SERVICEACCOUNT=default
-e PFLT_SCORECARD_IMAGE=quay.io/operator-framework/scorecard-test@sha256:f9bb5c28e4c2b0aecab97f5f17efa5a39e310a9e2b9f6e2127f81a08c2f57749
-v /tmp/preflight_tmp_dir.gv4tzvvl/kubeconfig:/kubeconfig
-v /tmp/preflight_operator_artifacts.brormnsy:/artifacts
-v /tmp/preflight_tmp_dir.gv4tzvvl/config.json:/opt/config.json
registry.dfwt5g.lab:4443/preflight/preflight:pr843 check operator
docker.io/ibmcom/ibm-mq-operator-bundle@sha256:723e3abcf5f8e2eea1458289c35b67cdcef92c3c256d3f384d6e4275302f0a89
Expected Result
Get a bit more logs to understand why DeployableByOLM threw an error
Actual Result
DeployableByOLM" err="<nil>" result=ERROR
Adding more details because it seems to be a regression introduced between preflight 1.0.8 and preflight 1.1.0.
Here I used another open-access operator for the tests, it passed DeployableByOlm on preflight 1.0.8 and failed DeployableByOlm starting from preflight 1.1.0:
quay.io/rh-nfv-int/testpmd-operator:v0.2.9
index image: quay.io/rh-nfv-int/nfv-example-cnf-catalog:v0.2.9
Logs on 1.0.8 (DeployableByOlm passed):
time="2022-04-25T04:52:53Z" level=info msg="certification library version 1.0.8 <commit: f6eb6893c33c39f5a9ba9e46c0c217f3079dd3ac>"
-- snip --
time="2022-04-25T04:53:04Z" level=info msg="running check: DeployableByOLM"
time="2022-04-25T04:53:07Z" level=trace msg="reading annotations file from the bundle"
time="2022-04-25T04:53:07Z" level=debug msg="mounted directory is /tmp/preflight-653131518/fs"
time="2022-04-25T04:53:07Z" level=trace msg="searching for key
-- snip --
time="2022-04-25T04:54:26Z" level=info msg="check completed: DeployableByOLM" result=PASSED
Logs on 1.1.0 (DeployableByOlm failed):
time="2022-04-26T02:29:18Z" level=info msg="certification library version 1.1.0+500bf9eb354cbf9bf7bf0d78eae4b12c6469396a <commit: 500bf9eb354cbf9bf7bf0d78eae4b12c6469396a>"
-- snip --
time="2022-04-26T02:29:29Z" level=debug msg="running check: DeployableByOLM"
time="2022-04-26T02:29:29Z" level=trace msg="reading annotations file from the bundle"
time="2022-04-26T02:29:29Z" level=debug msg="mounted directory is /tmp/preflight-287613090/fs"
time="2022-04-26T02:29:29Z" level=debug msg="Command being run: [operator-sdk bundle validate -b none --output json-alpha1 --select-optional name=community --select-optional name=operatorhub --verbose /tmp/preflight-287613090/fs]"
time="2022-04-26T02:29:29Z" level=info msg="check completed: DeployableByOLM" result=FAILED
Logs on 1.4.2 (DeployableByOlm failed):
time="2022-11-29T07:42:08Z" level=info msg="certification library version 1.4.2 <commit: f9cff772837132149df69f8ae251d3caf81c49ac>"
-- snip --
time="2022-11-29T07:42:18Z" level=debug msg="running check: DeployableByOLM"
time="2022-11-29T07:42:21Z" level=trace msg="reading annotations file from the bundle"
time="2022-11-29T07:42:21Z" level=debug msg="image extraction directory is /tmp/preflight-2534365746/fs"
time="2022-11-29T07:42:21Z" level=info msg="check completed: DeployableByOLM" err="<nil>" result=ERROR
@tkrishtop thanks for reporting this!
The operator image you gave me there is the operator application container itself, but quay.io/rh-nfv-int/testpmd-operator-bundle:v0.2.9 is the bundle we're testing against for that operator. I was able to replicate the issue here with that bundle on preflight built from the main branch.
The core of the issue is that the bundle is malformed. The ValidateOperatorBundle is not included in your snippets, but I would guess that it is also failing in your execution, indicating that the service account in the manifests/ directory matches the service account value found in your cluster service version - and that this is invalid.
To test, I rebuilt the bundle with that service account manifest specifying a different service account and all tests passed without a problem. For reference, I've temporarily published that bundle here quay.io/komish/rh-nfv-int_testpmd-operator-bundle:v0.2.9. I'll probably clean it up from my Quay namespace at some arbitrary time in the future.
The ScorecardOlmSuiteCheck check throws an error as well for the same reason. I do find it odd that this is being listed as an error, and will look into why - but if I had to guess, it's because the scorecard results themselves report an error instead of a failure. I'll check our logic that causes a scorecard check value to get reported as an erroring check vs. a failing check, but there may not be much we can do about this.
With that said, DeployableByOLM still fails with, seemingly, no error. I'll look into the reasons why this is happening and see what I can do to fix it soon.
All of this relates to the testpmd-operator-bundle - but you initially reported this issue with another bundle that's not accessible to me. It's entirely possible that the case there is different, but have a look and see if this may be the cause. If not, let us know and I'm happy to troubleshoot further - but we may need to work with you to get a better look at the bundle in question.
Let us know.
Hi @komish, thank you for having a look!
Just a quick update before heading into the weekend:
you initially reported this issue with another bundle that's not accessible to me
The IBM operator is also an open-access one:
bundle_image: "docker.io/ibmcom/ibm-mq-operator-bundle@sha256:723e3abcf5f8e2eea1458289c35b67cdcef92c3c256d3f384d6e4275302f0a89" # v1.3.1
index_image: "docker.io/ibmcom/ibm-mq-operator-catalog:v1.3.1"
This bundle was also failing validation, and deployable performs a validation as a preventative measure, but logs it as an error because it's unexpected. I've submitted a PR to fix the issue with the nil errors when the DeployableByOLM check passes, which should help provide more information here.
It is still an independent check, but because we perform a validation within DeployableByOLM to reduce the likelihood of a failure in deployment that is similar to Bundle Validation, a failure in bundle validation will probably cause a failure in DeployableByOLM.
In other words, if the bundle does not validate, deployable will probably fail very early.
|
GITHUB_ARCHIVE
|
Why are Hadoop and Spark not in the official Ubuntu repositories?
UPDATE (2021-11-13 22:12 GMT+8): regarding the Snap packages, @karel suggested that this question is a duplicate of Why don't the Ubuntu repositories have the latest versions of software? I disagree, because (1) Snaps, being self-confined and bundled with all its dependencies, are different from deb packages and I would expect the former to follow upstream more closely, and (2) even if not, I would expect them to be in stable by now.
I see this has already been asked in Hadoop & Spark - why no Ubuntu packages? , but (1) that was back in 2015 and the computing landscape has changed a lot since then, and (2) the only response to that other question does not really answer it, so I thought it would be appropriate to ask again.
So now in 2021 cloud computing and big data has only become more ubiquitous compared to 2015. Considering that one of the major use cases of Linux is in cloud computing / big data, why is the de-facto way of setting up Hadoop and Spark (key frameworks related to big data processing) still downloading and unpacking archives from upstream, instead of simply fetching the appropriate binary packages from the official Ubuntu repositories by running an appropriate apt install command? Unless I'm missing something, I imagine that having such commonly-used frameworks prepackaged for Ubuntu would bring a number of tangible benefits to a vast user base, such as (but not limited to):
Improved integration with the host system
Less manual setup and configuration required
P.S. I've also checked the Snap store considering Canonical's push towards snaps in recent years, and while they appear to be packaged (Hadoop, Spark), the last efforts were back in 2017 and they are only available in the unstable beta / edge channels.
Does this answer your question? Why don't the Ubuntu repositories have the latest versions of software?
No, because Hadoop and Spark do not seem to be in the official Ubuntu repositories at all (I could not find anything relevant with apt-cache search)
The hadoop and spark snap packages haven't been updated since 2017 either. That's what makes this question either a duplicate question or opinion-based.
But then (1) I'd expect Snap packages to follow upstream more closely, and (2) even if not, it should already be in stable by now
I would expect the same thing too as both snap packages are maintained by the same person, but it didn't happen.
Both Hadoop and Spark were dropped from Debian years ago, mostly due to a lack of volunteer interest in maintaining those packages. Ubuntu gets most of its deb packages from Debian, so they were dropped from Ubuntu, too.
Hadoop: Debian tracker page - Debian Bug #630820
Spark: Debian tracker page - Debian Bug #946336
Any community volunteer willing to learn the process and contribute the effort can re-introduce the packages to Debian, and they will subsequently flow into future releases of Ubuntu. More volunteers = More, better, and up-to-date software.
Also, according to https://wiki.debian.org/Hadoop, the Hadoop developers didn't make deb packaging and maintaining easy for the Debian volunteers:
There are a number of reasons for this; in particular the Hadoop build process will load various dependencies via Maven instead of using distribution-supplied packages. Java projects like this are unfortunately not easy to package because of interdependencies; and unfortunately the Hadoop stack is full of odd dependencies
If this information is stale or incorrect, once again it's up to community volunteers to step up, make corrections, and implement changes. Debian and Ubuntu are driven by volunteers. More volunteers = Better documentation.
Thank you, this was the detailed explanation I was looking for. It's a shame that the Hadoop developers did not make it easy to package for distributions such as Debian (and Ubuntu). Maybe I should consider contributing sometime :-)
|
STACK_EXCHANGE
|