Welcome to the Irssi Notifier Development Site

The idea is the same as in the first post I linked: remote visual notification of private messages, messages sent to you, and messages in which you are addressed. The only significant difference is that this is a real-time notification! OK, OK, it pops up after about half a second. To make this possible you need the irssi proxy configured (find out how here), the libnotify library and its Python bindings, the python-irclib library, Python itself and, optionally, if you are behind a restrictive proxy such as Squid, an SSH client capable of port forwarding (more on this later). First, a screenshot of what this baby does. The instructions below are for Kubuntu, my laptop's distro, but they should work on any Debian-based distro. Let's install the needed dependencies:

apt-get install libnotify1 python-notify python-gtk2 python-setuptools python-irclib notification-daemon

Once that finishes, you can use setuptools' easy_install tool to install Irssi Notifier. It will install Irssi Notifier into Python's site-packages and the irssi-notifier binary into /usr/bin/.
Now, until you get your configuration settings right and Irssi Notifier no longer throws an error, run it with the settings on the command line. For example, if you have set up irssi proxy to listen on domain example.tld, port 55555, with proxy password BAR for the nick Foo:

irssi-notifier -P example.tld:55555:Foo -p BAR

If the above did not throw an error (it shouldn't), you can write the configuration to a file (defaults to ~/.irssinotification):

irssi-notifier -P example.tld:55555:Foo -p BAR -W

The next time you want to launch Irssi Notifier, you just have to run irssi-notifier without any options; it picks up the saved configuration. Here is what gets written to the configuration file, plus some other available configuration options:

[main]
passwd = BAR
proxies = example.tld:55555:Foo
timeout = 5
friends = foobar barfoo

- nick - The common nick used in all defined proxies.
- passwd - The irssi proxy password.
- proxies - A space-separated list of irssi proxies to listen to, each of the form <address>:<port> or <address>:<port>:<nick>. If you use the first form you must also provide the nick configuration variable; that is how we know which messages are addressed to you.
- timeout - Timeout in seconds for the pop-up notification to go away.
- friends - A space-separated list of your friends' nicks, so you are notified when they join, part, quit or change nicks.

That's about all that's needed to run Irssi Notifier. Connect-Proxy is basically a helper for OpenSSH that enables SSH connections through a proxy server. There is info on how to use connect-proxy on its own page. In my specific case I need to connect through an HTTP proxy that does not require authentication, so I added this to my ~/.ssh/config:

ProxyCommand connect-proxy -H <proxy_host>:<proxy_port> %h %p

Now that command is executed for every SSH connection, and that is how I am able to SSH through the HTTP proxy.
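As an aside on the proxies format described above, here is a minimal sketch (the function name is made up for illustration; this is not Irssi Notifier's actual code) of how a space-separated proxies value can be parsed into (address, port, nick) tuples, falling back to the nick configuration variable when the third field is omitted:

```python
def parse_proxies(proxies, default_nick=None):
    """Parse a space-separated list of <address>:<port>[:<nick>] specs."""
    result = []
    for spec in proxies.split():
        parts = spec.split(":")
        if len(parts) == 2:
            if default_nick is None:
                raise ValueError("no nick in %r and no default nick set" % spec)
            address, port, nick = parts[0], parts[1], default_nick
        elif len(parts) == 3:
            address, port, nick = parts
        else:
            raise ValueError("bad proxy spec: %r" % spec)
        result.append((address, int(port), nick))
    return result
```

For example, parse_proxies("example.tld:55555:Foo") yields [("example.tld", 55555, "Foo")], and the two-field form works as long as a default nick is supplied.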
And for Irssi Notifier, basically, I establish a tunnel for each defined irssi proxy connection, for example:

ssh -L 55555:localhost:55555 firstname.lastname@example.org

With the tunnel in place, point Irssi Notifier at localhost instead of the remote host:

irssi-notifier -P localhost:55555:Foo -p BAR

You might also find some additional info on Irssi Notifier's blog. Please help us translate IrssiNotifier; find out how.
Microsoft Virtual Academy

The Microsoft Virtual Academy is a website for IT Pros and Developers offering free, self-paced, interactive training resources led by experts. Follow us on Twitter @MSVirtAcademy

MVA Jump Starts: SQL Server 2012, System Center 2012, Windows Server "8", Windows Server 2012

Browse by Tags - Microsoft Virtual Academy Tagged Content List

- Tune Your Site (& Apps) to Maximize Search Jump Start: Bing it on! Learn best practices for how to build Web sites that are easy to discover, including guidance on architecture, page load time, getting your site indexed, and the resources available within the Bing Toolbox, including APIs and SDK documentation. During this two-hour session, Microsoft Senior... (21 May 2013)
- New Virtual Labs for System Center 2012 SP1: Start with Configuration and Infrastructure and go on to design a Multi-Tier Service Template. For a growing number of businesses, the journey to cloud computing starts with private cloud implementation. A Microsoft private cloud dramatically changes the way your business produces and consumes IT services by creating a layer of abstraction over your pooled IT resources, enabling your... (13 May 2013)
- Lync 2013 Jump Start! Accelerated Cert Prep for Exam 70-336! Two of the most "edutaining" Jump Starts we've ever produced were last year's courses on Lync Server 2010. This year, we have the same instructor teams back to teach IT Pros and telecommunications experts how to take advantage of Lync Server 2013. Join Unified Communications Architect Brian Ricks and... (10 May 2013)
- Extend Infrastructure! Cool "Azure for IT Pros" Jump Start coming May 16! Register Now! This fast-paced MVA Jump Start is designed to help IT departments understand why it's time to start the process of extending infrastructure to the cloud. Microsoft Technical Evangelist David Tesar and Azure Group Technical Product Manager David Aiken will leverage a demo-rich and engaging team-teaching... (29 Apr 2013)
- MVA Live Q&A on 4/3/2013: Opportunity to ask your virtualization questions! On April 3, MVA will launch the first in a monthly series of free, live online Q&A sessions led by members of our Microsoft technical evangelist team. Space is limited. Register now. Due to the popularity of the three courses we recently published to MVA covering Microsoft's approach to... (18 Mar 2013)
- New Live Jump Start: MS Tools for VMware Integration & Migration. Michael Aldridge, MVA Site Manager. REGISTER TODAY: LIVE EVENT March 14, 2013, 8:00am-12:00pm PST. This live interactive course is designed for IT professionals who need to manage, monitor and automate VMware in their datacenter using System Center 2012 SP1. Get interactive live chats and Q&A with the experts. You will learn from... (26 Feb 2013)
//
//  main.swift
//  CEF.swift
//
//  Created by Tamas Lustyik on 2015. 07. 18..
//  Copyright © 2015. Tamas Lustyik. All rights reserved.
//

import Cocoa
import CEFswift

class SimpleApp: CEFApp, CEFBrowserProcessHandler {
    let client: SimpleHandler

    init() {
        client = SimpleHandler.instance
    }

    // CEFApp
    var browserProcessHandler: CEFBrowserProcessHandler? {
        return self
    }

    // CEFBrowserProcessHandler
    func onContextInitialized() {
        let winInfo = CEFWindowInfo()
        let settings = CEFBrowserSettings()
        let cmdLine = CEFCommandLine.globalCommandLine

        var url = NSURL(string: "http://www.google.com")!
        if let urlSwitch = cmdLine?.valueForSwitch("url") where !urlSwitch.isEmpty {
            url = NSURL(string: urlSwitch)!
        }

        CEFBrowserHost.createBrowser(winInfo, client: client, url: url, settings: settings, requestContext: nil)
    }
}

class SimpleHandler: CEFClient, CEFLifeSpanHandler {
    static var instance = SimpleHandler()

    private var _browserList = [CEFBrowser]()
    private var _isClosing: Bool = false
    var isClosing: Bool { return _isClosing }

    // from CEFClient
    var lifeSpanHandler: CEFLifeSpanHandler? {
        return self
    }

    // from CEFLifeSpanHandler
    func onAfterCreated(browser: CEFBrowser) {
        _browserList.append(browser)
    }

    func doClose(browser: CEFBrowser) -> Bool {
        if _browserList.count == 1 {
            _isClosing = true
        }
        // Allow the close to proceed.
        return false
    }

    func onBeforeClose(browser: CEFBrowser) {
        for (index, value) in _browserList.enumerate() {
            if value.isSameAs(browser) {
                _browserList.removeAtIndex(index)
                break
            }
        }
        if _browserList.isEmpty {
            CEFProcessUtils.quitMessageLoop()
        }
    }

    // new methods
    func closeAllBrowsers(force: Bool) {
        _browserList.forEach { browser in
            browser.host?.closeBrowser(force: force)
        }
    }
}

class SimpleApplication: NSApplication, CefAppProtocol {
    var _isHandlingSendEvent: Bool = false

    func isHandlingSendEvent() -> Bool {
        return _isHandlingSendEvent
    }

    func setHandlingSendEvent(handlingSendEvent: Bool) {
        _isHandlingSendEvent = handlingSendEvent
    }

    override init() {
        super.init()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func sendEvent(event: NSEvent) {
        let stashedIsHandlingSendEvent = _isHandlingSendEvent
        _isHandlingSendEvent = true
        defer { _isHandlingSendEvent = stashedIsHandlingSendEvent }
        super.sendEvent(event)
    }

    override func terminate(sender: AnyObject?) {
        let delegate = NSApplication.sharedApplication().delegate as! AppDelegate
        delegate.tryToTerminateApplication(self)
    }
}

let args = CEFMainArgs(arguments: Process.arguments)
let app = SimpleApp()

SimpleApplication.sharedApplication()

let settings = CEFSettings()
CEFProcessUtils.initializeMainWithArgs(args, settings: settings, app: app)

let appDelegate = AppDelegate()
appDelegate.createApplication()

CEFProcessUtils.runMessageLoop()
CEFProcessUtils.shutDown()

exit(0)
Remove actions from the mappings

Actions should be defined separately: they will always be specified by the application, since the application owns the list of actions it listens for. mappings will then contain only the mappings between hardware events and the actions defined by the app. These mappings could be specified initially by the app as a default mapping, but they could eventually be generated by the user manually, via a common configuration UI for A-Frame (as Steam does), or by fetching configurations remotely.

Right now we have:

```js
var mappings = {
  actions: {
    changeTask: { label: 'Change task' },
    testlog: { label: 'Test Log' },
    testlog_task1: { label: 'Test Log Task 1' }
  },
  mappings: {
    default: {
      common: {
        triggerdown: 'testlog'
      },
      'vive-controls': {
        gripdown: 'changeTask',
        trackpaddown: 'testlog'
      },
      'oculus-touch-controls': {
        abuttondown: 'changeTask'
      },
      'windows-motion-controls': {
        gripdown: 'changeTask'
      },
      keyboard: {
        't_up': 'testlog',
        'c_up': 'changeTask'
      }
    },
    ...
```

And we call:

```js
AFRAME.registerInputMappings(mappings);
```

The proposed change is to take actions out of these mappings and add another function to set them:

```js
AFRAME.registerInputActions(actions);
```

This looks good to me. Actions exist in specific states, so the actions should live in states.
Also, states might want to have metadata too (imagining what the config UI might look like), so we might want to put actions in an object:

```js
AFRAME.registerInputActions({
  default: {
    label: 'Default State',
    actions: {
      changeTask: { label: 'Change task' },
      testlog: { label: 'Test Log' }
    }
  },
  task1: {
    label: 'Painting',
    actions: {
      testlog_task1: { label: 'Test Log Task 1' }
    }
  }
});
```

With a corresponding mapping:

```js
AFRAME.registerInputMappings({
  default: {
    'vive-controls': {
      gripdown: 'changeTask',
      trackpaddown: 'testlog'
    },
    'oculus-touch-controls': {
      abuttondown: 'changeTask',
      triggerdown: 'testlog'
    },
    'windows-motion-controls': {
      gripdown: 'changeTask',
      triggerdown: 'testlog'
    },
    keyboard: {
      't_up': 'testlog',
      'c_up': 'changeTask'
    }
  },
  task1: {
    'vive-controls': {
      triggerdown: 'testlog_task1',
      gripdown: 'changeTask'
    },
    'oculus-touch-controls': {
      triggerdown: 'testlog_task1',
      abuttondown: 'changeTask'
    },
    'windows-motion-controls': {
      triggerdown: 'testlog_task1',
      gripdown: 'changeTask'
    },
    keyboard: {
      'y_up': 'testlog_task1',
      'c_up': 'changeTask'
    }
  }
});
```

@netpro2k yep, I just forgot to include them in the example. I created a PR with the proposals: https://github.com/fernandojsg/aframe-input-mapping-component/pull/17
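To make the lookup order in these structures concrete, here is a minimal sketch of how an action could be resolved from registered mappings given the active state, the controller type, and the hardware event, preferring the controller-specific section and falling back to `common`. The function name `resolveAction` is made up for illustration; this is not the component's actual implementation:

```javascript
// Hypothetical resolver: state -> controller section (or `common`) -> event -> action.
function resolveAction(mappings, state, controller, event) {
  const stateMappings = mappings[state];
  if (!stateMappings) { return undefined; }
  // Prefer the controller-specific mapping, then fall back to `common`.
  const sections = [stateMappings[controller], stateMappings.common];
  for (const section of sections) {
    if (section && section[event]) { return section[event]; }
  }
  return undefined;
}

const mappings = {
  default: {
    common: { triggerdown: 'testlog' },
    'vive-controls': { gripdown: 'changeTask' }
  },
  task1: {
    'vive-controls': { triggerdown: 'testlog_task1' }
  }
};

console.log(resolveAction(mappings, 'default', 'vive-controls', 'gripdown'));    // changeTask
console.log(resolveAction(mappings, 'default', 'vive-controls', 'triggerdown')); // testlog (falls back to common)
console.log(resolveAction(mappings, 'task1', 'vive-controls', 'triggerdown'));   // testlog_task1
```

A `common` fallback like this keeps per-controller sections small: only events whose action differs per controller need to be listed.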
How third-graders and I programmed an addressable garland

At our hackerspace we do a lot of work with schoolchildren, but mostly with advanced high-school students. They are quite easy to work with; they are, in general, almost like university students. Last year I wanted to expand my competence and try something more difficult. Below the cut is the story of how I made friends with an addressable RGB garland and an Arduino, the Arduino with Scratch, and Scratch with junior schoolchildren.

In a previous post I already described a little about the school where I mainly teach. After the strong high-school students I first decided to take on the eighth (now ninth) grade, but it turned out surprisingly unresponsive; it is not often you meet schoolchildren so uninterested in anything. After the ninth grade came the fifth, since the older students are all very strong. However, the fifth grade is another story: it is almost impossible to teach them to write code, you need a graphical programming environment. And this is where the fun begins.

The hardware: an addressable RGB LED garland, 50 pcs of WS2801. For those who have not encountered it: each light contains three LEDs and a chip that receives commands over the SPI protocol and lights the corresponding LEDs.

The software: the Scratch graphical programming environment. It probably would have been better to do all this in ArduBlock, but at the time it seemed to me that Scratch would be much faster.

The class: fifth grade, 14 people (I will not post the photo).

The easiest part was connecting the Arduino to the garland; there is a dedicated library: github.com/adafruit/Adafruit-WS2801-Library. Next, I needed to write a sketch that would process commands arriving from the computer over the serial port and light the garland. Here a problem arose: when sending a sufficiently long string, the garland refused to light up in the right colors.
I managed to overcome this with some sort of shamanism, but I never understood what the problem was (I will not post that code, because I myself do not understand how it works). Next, I needed to connect the serial port with Scratch. Scratch can write to a socket using the UDP protocol (among the graphic blocks there is a "broadcast" block to which a string can be sent), and this is the only way to get a text string out of it. I decided to write an adapter in Python that would read lines from the corresponding port, save them to a buffer, and send them to the serial port at intervals. Good libraries were found for this: github.com/pilliq/scratchpy for interacting with Scratch, and the serial library (out of the box) for the serial port. The result was a somewhat buggy, but on the whole working, graphical environment for controlling the garland.

Next, I had to explain to the children how to use it. I do not like explaining things at all; I prefer it when the children read a training manual and do everything themselves. So I wrote such a manual: docs.google.com/document/d/1XdhVsTQvrjNnEUoqMiRrYoqEJPDo2HTAs6YYHc0dJag/edit

Essentially they learn the following:
- Working with the graphical environment: how to drag and drop blocks, and which block is responsible for what.
- How to specify colors as RGB numbers.
- Working with variables.
- Working with arrays (to light the garland, we send it an array, each element of which is the color a light should show).

What ultimately happened: at first, complete delight. For two lessons the children could not stop staring, even the girls; all you heard were exclamations of "Oh, it lit up!" However, now that we have been at it for about eight hours, interest is fading; aimlessly lighting bulbs is not that interesting. Luckily, an applied task appeared: to make scenery for a school theater production (a bonfire, a starry sky, an iridescent ruby). We are doing that now, and it seems interesting.
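The adapter's core job, turning a text message from Scratch into bytes for the garland, can be sketched in a few lines. This is a minimal illustration under my own assumptions (the "r,g,b r,g,b ..." message format and the function name are invented for this sketch, not taken from the actual adapter):

```python
def colors_to_ws2801_bytes(message):
    """Convert a broadcast string like "255,0,0 0,255,0" into the raw
    byte stream a WS2801 chain expects: 3 bytes (R, G, B) per light,
    shifted out in order over SPI. Channel order is assumed to be RGB."""
    out = bytearray()
    for triplet in message.split():
        r, g, b = (int(part) for part in triplet.split(","))
        for channel in (r, g, b):
            if not 0 <= channel <= 255:
                raise ValueError("channel out of range: %d" % channel)
        out += bytes((r, g, b))
    return bytes(out)
```

The real adapter would then write these bytes to the serial port at a fixed interval, with the Arduino sketch forwarding them to the garland over SPI.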
The main problem I am facing is that the children remember very poorly what they did in the previous lesson; each time I have to flip the manual a few pages back and repeat everything. I still do not understand how to deal with this. You can, of course, make them do many tasks of the same type so the material sticks, but their motivation drops at the same time. Along the way it turned out that the girls like the ordinary Scratch cat more than the garland, so they now follow my usual course without any hardware. After the fifth grade I took on a third grade, and with them it went much the same. Overall, the experience has been very successful: I now know that even classes with younger students do not require special pedagogical abilities. You just need to give them an interesting subject to study, and they will do everything themselves.
[Freedombox-discuss] What is Freedombox? (Was: CAs and cipher suites for cautious servers like FreedomBox)

Quoting Petter Reinholdtsen (2013-09-14 09:13:07)
> [cgw993 at aol.com]
> > Sorry for the basic question but is Freedombox considered to be a
> > collection of hardware or software or is it the name of the project
> > itself?
> It is a project and a vision to create a home server allowing its
> users privacy and security for their data and communication. Part of
> the project is to integrate or build the software needed for this, and
> part of the project is to find/create a useful hardware platform to
> make it easy for non-technical people to get access to the Freedombox.

I agree with the above, but am concerned about the high risk of misinterpreting the "build the software needed" part: I believe strongly that the FreedomBox should not contain any code written specifically for FreedomBox. FreedomBox should consist only of code already in common use among geeks. The reason is that the target users are non-geeks, who cannot be expected to understand the code, e.g. to notice if it misbehaves. Essentially, a FreedomBox is a server with *no* administrator and no eyeballs on the code.

As a concrete example, it worries me if a GUI is invented and promoted as being the "Front End for Freedom Plug UI". That might seem compelling to deploy on FreedomBox, but wait a minute: when promoted so directly at the FreedomBox project, it conversely means it is less likely that other related projects, e.g. Emdebian router projects or Debian-based phone projects or whatever, would consider reusing the same tool for *their* projects, thereby enhancing the quality assurance inherent in Free Software through the logic of "given enough eyeballs, all bugs are shallow". Given *few* enough *geeky* eyeballs, all bugs are still lurking.

We shall eat our own dogfood.
We shall only ship to our non-geek friends what we use and trust ourselves, and only those parts that are boring to us because they just work correctly all the time.

> > Q #2 - Would it be essentially impossible or completely impractical
> > for the freedombox to contain only free software, the firmware,
> > drivers, algorithms, code, everything free? The device cannot be
> > secured if it contains any non free software (code, firmware,
> > libraries, anything) right?
> It is an expressed goal of the project to find a hardware platform
> without any non-free parts.

Yes. It has also been emphasized that FreedomBox is not tied to a single piece of hardware but to a certain _kind_ of hardware. Even if we developers work mostly on one or a few specific hardware boards now, that only means it is easier for us to develop that way; over time we expect more devices to emerge on the market that fit the criteria of running Debian, being cheap, low-power and silent, and preferably being Open Hardware.

NB! I speak for myself here, as an enthusiastically involved developer.

 * Jonas Smedegaard - idealist & Internet-arkitekt
 * Tlf.: +45 40843136  Website: http://dr.jones.dk/

 [x] quote me freely  [ ] ask before reusing  [ ] keep private
/* global ResizeSensor */
/* eslint-disable ember/no-observers */

const TABLE_POLYFILL_MAP = new WeakMap();

class TableStickyPolyfill {
  constructor(element) {
    this.element = element;
    this.element.style.position = 'static';
    this.side = element.tagName === 'THEAD' ? 'top' : 'bottom';

    // Bind once so the method keeps `this` when passed as a rAF/observer callback.
    this.repositionStickyElements = this.repositionStickyElements.bind(this);

    this.setupRaf = requestAnimationFrame(this.repositionStickyElements);
    this.setupResizeSensors();
    this.setupRowMutationObservers();

    this.mutationObserver = new MutationObserver(() => {
      this.teardownResizeSensors();
      this.teardownRowMutationObservers();
      this.setupResizeSensors();
      this.setupRowMutationObservers();
      this.repositionStickyElements();
    });
    this.mutationObserver.observe(this.element, { childList: true });
  }

  destroy() {
    this.element.style.position = 'sticky';
    cancelAnimationFrame(this.setupRaf);
    this.teardownResizeSensors();
    this.teardownRowMutationObservers();
    this.mutationObserver.disconnect();
  }

  setupRowMutationObservers() {
    let rows = Array.from(this.element.children);
    this.rowMutationObservers = rows.map(row => {
      let observer = new MutationObserver(this.repositionStickyElements);
      observer.observe(row, { childList: true });
      return observer;
    });
  }

  teardownRowMutationObservers() {
    this.rowMutationObservers.forEach(observer => observer.disconnect());
  }

  setupResizeSensors() {
    let rows = Array.from(this.element.children);
    let firstCells = rows.map(r => r.firstElementChild);
    this.resizeSensors = firstCells.map(cell => {
      let sensor = new ResizeSensor(cell, this.repositionStickyElements);
      return [cell, sensor];
    });
  }

  teardownResizeSensors() {
    this.resizeSensors.forEach(([cell, sensor]) => sensor.detach(cell));
  }

  repositionStickyElements() {
    let table = this.element.parentNode;

    // Compensate for any CSS transform scaling applied to the table.
    let scale = table.offsetHeight / table.getBoundingClientRect().height;

    let rows = Array.from(this.element.children);
    let orderedRows = this.side === 'top' ? rows : rows.reverse();

    let offset = 0;
    let heights = orderedRows.map(r => r.getBoundingClientRect().height * scale);

    for (let i = 0; i < orderedRows.length; i++) {
      let row = orderedRows[i];
      let height = heights[i];

      for (let child of row.children) {
        child.style.position = '-webkit-sticky';
        child.style.position = 'sticky';
        child.style[this.side] = `${offset}px`;
      }

      offset += height;
    }
  }
}

export function setupTableStickyPolyfill(element) {
  TABLE_POLYFILL_MAP.set(element, new TableStickyPolyfill(element));
}

export function teardownTableStickyPolyfill(element) {
  TABLE_POLYFILL_MAP.get(element).destroy();
  TABLE_POLYFILL_MAP.delete(element);
}
0.7.5 redirect loop after upgrade

I get a redirect loop in the Safari browser after upgrading from 0.7.3 to 0.7.5:

19:35:50.873 [info] Migrations already up
19:35:55.346 [info] Password for user specified by ADMIN_EMAIL reset to DEFAULT_ADMIN_PASSWORD!
19:35:58.747 [info] Running FzHttpWeb.Endpoint with cowboy 2.9.0 at <IP_ADDRESS>:13000 (http)
19:35:58.755 [info] Access FzHttpWeb.Endpoint at https://firezone.example.com
19:36:03.081 request_id=FzxqDGz2-LZGD5cAAAJh [info] GET /users
19:36:03.635 request_id=FzxqDGz2-LZGD5cAAAJh remote_ip=myip [info] Sent 302 in 554ms
19:36:03.697 request_id=FzxqDJINHCxRg-UAAAKR [info] GET /
19:36:03.721 request_id=FzxqDJINHCxRg-UAAAKR remote_ip=myip [info] Sent 302 in 23ms
19:36:03.773 request_id=FzxqDJaZLMigQ0wAAAKh [info] GET /users
19:36:03.776 request_id=FzxqDJaZLMigQ0wAAAKh remote_ip=myip [info] Sent 302 in 2ms
19:36:03.833 request_id=FzxqDJolE6qaVooAAALB [info] GET /
19:36:03.835 request_id=FzxqDJolE6qaVooAAALB remote_ip=myip [info] Sent 302 in 1ms
19:36:03.891 request_id=FzxqDJ2kNC30BMIAAALR [info] GET /users
19:36:03.895 request_id=FzxqDJ2kNC30BMIAAALR remote_ip=myip [info] Sent 302 in 3ms
19:36:03.945 request_id=FzxqDKDcff0z2t4AAALx [info] GET /
19:36:03.948 request_id=FzxqDKDcff0z2t4AAALx remote_ip=myip [info] Sent 302 in 2ms
19:36:04.006 request_id=FzxqDKRzwrDoyxgAAAMB [info] GET /users
19:36:04.008 request_id=FzxqDKRzwrDoyxgAAAMB remote_ip=myip [info] Sent 302 in 2ms
19:36:04.065 request_id=FzxqDKf8TWZn0MQAAAMh [info] GET /
19:36:04.066 request_id=FzxqDKf8TWZn0MQAAAMh remote_ip=myip [info] Sent 302 in 1ms
19:36:04.123 request_id=FzxqDKt1MQKnCRQAAAMx [info] GET /users
19:36:04.126 request_id=FzxqDKt1MQKnCRQAAAMx remote_ip=myip [info] Sent 302 in 2ms
19:36:04.177 request_id=FzxqDK6jhru6lWgAAANR [info] GET /
19:36:04.178 request_id=FzxqDK6jhru6lWgAAANR remote_ip=myip [info] Sent 302 in 1ms
19:36:04.235 request_id=FzxqDLIhB5zas_sAAANh [info] GET /users
19:36:04.237 request_id=FzxqDLIhB5zas_sAAANh remote_ip=myip [info] Sent 302 in 1ms
19:36:04.288 request_id=FzxqDLVOCqcsV5IAAAOB [info] GET /
19:36:04.290 request_id=FzxqDLVOCqcsV5IAAAOB remote_ip=myip [info] Sent 302 in 1ms
19:36:04.348 request_id=FzxqDLja9bKSuNMAAAOR [info] GET /users
19:36:04.350 request_id=FzxqDLja9bKSuNMAAAOR remote_ip=myip [info] Sent 302 in 2ms
19:36:04.407 request_id=FzxqDLxb_uQlz_wAAAOx [info] GET /
19:36:04.408 request_id=FzxqDLxb_uQlz_wAAAOx remote_ip=myip [info] Sent 302 in 1ms
19:36:04.467 request_id=FzxqDL_wKKcOC70AAAPB [info] GET /users
19:36:04.469 request_id=FzxqDL_wKKcOC70AAAPB remote_ip=myip [info] Sent 302 in 2ms
19:36:04.518 request_id=FzxqDML8zNUQCZcAAAPh [info] GET /
19:36:04.520 request_id=FzxqDML8zNUQCZcAAAPh remote_ip=myip [info] Sent 302 in 2ms
19:36:04.578 request_id=FzxqDMaSOBbuYbIAAAPx [info] GET /users
19:36:04.580 request_id=FzxqDMaSOBbuYbIAAAPx remote_ip=myip [info] Sent 302 in 1ms

Before the update I was logged in as a user who has 2FA configured. If I open another browser, all is good and a new login flow runs. Deleting the _fz_http_key cookie in developer tools fixed the problem, but that is not a slick upgrade path and usually not something the average user would think of.

@ihard I'm sorry that you faced this issue; it is due to some changes in session storage. @jamilbk the issue is here. I'm thinking about how we can make a fix that ensures something like this never happens again, and one idea is to add the Firezone version as a suffix to all cookie encryption secrets. It means that on the release of a new version all users will be unauthenticated. This seems the most viable option, because LiveView can't affect user sessions, so from the place where the bug lives we don't have a lot of options; even redirecting to /sign_out won't work because it's a DELETE method. WDYT?

@AndrewDryga sgtm. Have hit this once or twice before. Maybe the session key could include some random hardcoded suffix string we could choose to update in PRs that affect sessions.
But it could be a pain to remember to update it… the version works too.
Improved Color Picker

Version 8.0 makes it easier to select colors, or define custom colors, for calendar items such as events, tasks, notes, categories, priorities, and locations. To learn how these colors can be used to help you visualize your schedule, see the FAQ.

Clicking the color picker will show a drop-down grid of common colors, along with buttons for setting the color to be transparent or to pick from more colors. Additional colors can be quickly selected via a color wheel, with sliders for adjusting the saturation and brightness of the color.

The appearance settings have been updated to use the new color picker, giving a cleaner look to the interface. The "Random" button has also been re-labeled "Splash", to make it more fun and clarify its intent, which is to splash the appearance with a random color combination (click it several times to find a combination you like). These new appearance settings are consistently used throughout the program, wherever color or appearance properties can be specified.

Using the New Color Picker - An Example Walkthrough

To select a different color, click the color's box. A drop-down grid of common colors will be shown. If you don't want a background color, you can select "Transparent" for both the primary and secondary background colors. This will result in only the associated item's text being shown in the calendar, with no block around the text.

To select from 16.7 million possible colors, click the "More Colors..." button at the bottom of the drop-down. This will result in the following window appearing. To select a different color, simply click anywhere in the color wheel. The slider to the right of the color wheel allows the brightness of the color to be adjusted, and a preview of the currently selected color is shown below the OK and Cancel buttons. The sliders at the bottom of the window can also be used to adjust the color.
If you know the exact red, green, and blue values of the color you're looking for, you can enter the numbers directly into the boxes along the right side of the sliders. In the example below, the color was adjusted to a slightly-diluted red. When finished, click the OK button. The selected color will appear in the color box.

If using a gradient to blend the colors, the secondary color will probably also need to be adjusted. In the example below, the newly selected red and the previous blue just don't look good together. After clicking the "More Colors" button for the secondary color, the "Select Color" window appears. In this example, I want the secondary color to be a darker shade of the primary color. Here's where the gray box labeled "[Click here and drag to any location to capture a color from the screen]" can make life easier. Instead of trying to manually select the same hue as the primary color, simply click the gray box and drag the ink dropper to the primary color's box. After dragging, release the mouse button to capture the color. You can also drag anywhere else on the screen to capture a color. As indicated above, the next step in this example is to reduce the brightness by dragging the brightness bar down, as shown below.

After clicking the OK button, voila! A nicely colored gradient is shown using the custom colors selected above. Many other gradient styles can be selected from a drop-down list. A preview of each gradient style will also be shown, using the selected colors.

Version 8.0 improves the selection of item colors, making it easier than ever to customize exactly how you want items to appear on the calendar. The new color-picker control rivals or exceeds the ease-of-use of color selection tools in expensive graphic editing tools, but is just one small component of the overall VueMinder experience.
package com.redhat.fisdemoblockchain;

import java.util.HashMap;
import java.util.Map;

import com.redhat.fisdemoblockchain.exception.AccountFormatException;
import com.redhat.fisdemoblockchain.exception.NoAccountFoundException;

public class MockBitcoinApp {

    private Map<String, Integer> accounts = new HashMap<String, Integer>();
    private Ledger ledger = new Ledger();

    public MockBitcoinApp() {
        accounts.put("234567", 1000);
        accounts.put("567890", 2000);
        accounts.put("123456", 3000);
        accounts.put("789012", 3500);
        accounts.put("345678", 4000);
        accounts.put("901234", 4500);
        accounts.put("456789", 300);
        accounts.put("678901", 900);
        accounts.put("890123", 600);
    }

    public Integer getBalance(String acctid) throws NoAccountFoundException, AccountFormatException {
        if (acctid.length() < 6)
            throw new AccountFormatException();
        Integer balance = accounts.get(acctid);
        if (balance == null)
            throw new NoAccountFoundException();
        return balance;
    }

    public String transfer(String inputString) {
        String[] inputs = inputString.split(",");
        if (inputs.length < 3)
            return "Transfer FORMAT ERROR!!!!";
        Integer amt;
        try {
            amt = Integer.valueOf(inputs[1]);
        } catch (Exception e) {
            return "Transfer AMT FORMAT ERROR!!!!";
        }
        return this.transfer(inputs[0], amt, inputs[2]);
    }

    public String transfer(String acctid, int amt, String recptid) {
        Integer acctBalance = accounts.get(acctid);
        Integer recptBalance = accounts.get(recptid);
        if (acctBalance != null && recptBalance != null) {
            accounts.put(acctid, acctBalance - amt);
            // Credit the recipient's account. (The original code mistakenly
            // wrote the recipient's new balance back to the sender's key.)
            accounts.put(recptid, recptBalance + amt);
        } else {
            return "Transfer: ONE of the account NOT FOUND!";
        }
        ledger.transfer(acctid, amt, recptid);
        return "Transfer completed! $" + amt + " from " + acctid + " to " + recptid
                + ", the remaining balance is: $" + (acctBalance - amt);
    }
}
The construction of campus networks has provided an advanced, comprehensive information environment for the teaching, scientific research and management of colleges. In the process of digitization and intelligentization, the data produced by all kinds of application systems in colleges keep growing, and a campus big-data environment has formed. College big data contain abundant information, so new data storage and analysis tools are needed to store and analyze huge amounts of college data and extract useful information from them. In this paper, a depth learning analysis algorithm based on MapReduce is proposed to deal with college data. Using the MapReduce parallel computing framework to implement campus data computing, we studied analysis and application systems for campus big data across different themes and levels, and dug out valuable information hidden behind college data. The experimental results show that the proposed MapReduce-based college data mining algorithm is effective. It provides new research ideas for big-data mining in colleges and a technical reference for the construction of the smart campus.
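The paper's own algorithm is not reproduced here, but the MapReduce programming model it builds on can be illustrated with a small self-contained sketch. Plain Python stands in for an actual Hadoop-style cluster, and the job (counting course enrollments per department from invented records) is an assumption for illustration, not an example from the paper:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every input record, yielding (key, value) pairs."""
    for record in records:
        for pair in mapper(record):
            yield pair

def shuffle(pairs):
    """Group all values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key group."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Invented example: count enrollments per department from (student, dept) records.
records = [("alice", "math"), ("bob", "cs"), ("carol", "math"),
           ("dave", "cs"), ("erin", "cs")]

mapper = lambda record: [(record[1], 1)]      # emit (department, 1) per record
reducer = lambda key, values: sum(values)     # sum the 1s per department

counts = reduce_phase(shuffle(map_phase(records, mapper)), reducer)
print(counts)  # {'math': 2, 'cs': 3}
```

In a real deployment the map and reduce phases run in parallel across cluster nodes, with the shuffle handled by the framework; the serial sketch only shows the dataflow.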
About this article
Zhang, W., & Jiang, L. (2018). Algorithm Analysis for Big Data in Education Based on Depth Learning. Wireless Personal Communications, 102, 3111–3119. https://doi.org/10.1007/s11277-018-5331-3
Keywords: Map Reduce; Depth learning; College data; Algorithm analysis
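The abstract above describes processing campus data with the MapReduce model. As a rough, purely illustrative sketch of that idea (the paper's actual algorithm and data are not reproduced here, and the record fields are invented), the map, shuffle, and reduce phases can be written in plain Python:

```python
from collections import defaultdict

# Hypothetical campus records; the field names are illustrative only.
records = [
    {"student": "s1", "course": "math"},
    {"student": "s2", "course": "math"},
    {"student": "s3", "course": "physics"},
]

def map_phase(record):
    # Map: emit one (key, value) pair per input record.
    yield (record["course"], 1)

def reduce_phase(key, values):
    # Reduce: aggregate all values that share a key.
    return key, sum(values)

# Shuffle: group the intermediate pairs by key.
groups = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        groups[key].append(value)

result = dict(reduce_phase(k, vs) for k, vs in groups.items())
print(result)  # enrollment count per course
```

In a real MapReduce deployment the shuffle is performed by the framework across machines; the sketch only shows the division of work that makes the computation parallelizable.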
My name is Lirong Xue. It’s my honor to serve as the president of ACSSPU.

My name is Kun (Kyle) Lu, a first-year PhD student from the Department of Operations Research and Financial Engineering (ORFE). My hometown is Hefei, Anhui. I am a “fake” arty youth who has many hobbies but is not very skilled at most of them. I am glad to work with these intelligent guys in ACSSPU, and I hope I can meet more friends here~

Hi, I’m Dake Li from Zhejiang Province, a first-year Ph.D. student in Economics. I received my Bachelor’s degree at Peking University. I love a bunch of sports, including badminton and basketball, and I really enjoy making new friends with interesting people on the courts. ACSSPU is a splendid place where I can devote myself to helping more people, so I feel lucky to be one of its members.

Hi, I’m Guanhua He from Xi’an, Shaanxi. I graduated from Tsinghua University, and now I’m a Ph.D. student in Molecular Biology. I like all kinds of outdoor activities, especially skiing and hiking. I’m very glad that I can work in ACSSPU, and I hope to serve the Chinese community at Princeton well.

Born in Jiangsu Province, China, graduated from Peking University in 2015, now in the MAE department. A jack of all trades but master of none. Likes to shoot craps.

My name is Weibo Fu. I serve as the treasurer of ACSSPU now. I have various interests including traveling, hot pot, video games, and Japanese anime.

Wanru Xiong, Life Affairs Department, first-year Ph.D. student in the Department of Population Studies; she received her Bachelor’s degree from Yuanpei College, Peking University. Coming from the great foodie province, she is unhappy whenever good food is out of reach. Since childhood she has been on friendly terms with the canteen staff, and she is honored to finally be the one serving the meals: “What are you having today? ACSSPU Chinese Lunch has plenty for everyone~~”

Wanru Xiong, first-year Ph.D. student in the Program in Population Studies. I am in charge of the weekly Chinese Lunch in ACSSPU. Do come and enjoy Chinese food with friends!

Yiran Fan (Irene), first-year Master’s student in COS. She is passionate about singing, songwriting, other music stuff, and imagining her life as a cat lady. Loves the art of bullshit and wasting time. Is known for her inability to handle spicy food.

Lily. 1st year Master of Finance.
Went to Wesleyan University for undergrad, and then worked at a hedge fund for 2 years.

Hi! This is Athena Li, a first-year Ph.D. student in the Department of Molecular Biology. I graduated from the School of Life Sciences, Tsinghua University in 2016. With pretty broad interests, I am always open to new experiences. It’s my great honor to join the family of ACSSPU. Hope we can have fun together here XD

I graduated from Tsinghua University and I am currently a PhD candidate in Mechanical and Aerospace Engineering at Princeton.

I’m Weikun Yang, a PhD student in Computer Science. I graduated from Peking University in 2015, and I enjoy skiing and swimming, as well as playing Kerbal Space Program and PS4 in my free time. Currently I manage our website.

I am Hao Lu, a graduate of the Math Department, Tsinghua University. Nice to meet you.

Lingzhi Cai. First-year PhD student. Chemical engineer. Too weird to live, too rare to die.

I am a Master in Finance student graduating in 2018. I come from Nanjing and earned a bachelor’s degree in Mathematics and Economics from Nanyang Technological University. I like fuzzy animals and running. I assist with the maintenance of our WeChat account, newsletter, and Facebook page.

Hey, I’m Mingyu, a first-year Neuroscience PhD. I love all kinds of sports (though I’m not good at any), am always enthusiastic about food (though I don’t know how to cook), and am interested in popular science.

I’m Linfeng Zhang, one of the sports ministers. I wish to “brutalize the physique and civilize the mind” with all my friends here at Princeton.
Written by: Thomas Padilla
Primary Source: Thomas Padilla

During graduate school I visited my fair share of archives. Living on funds dispensed from the FAFSA gods in combination with whatever part-time job I had, I often found myself hard-pressed to pony up money for photocopies. Somewhere along the line I got smarter and started using a point-and-shoot camera to gather as much primary source mana as possible. Since that time smart folks have written great posts geared toward helping people engaged in similar work. Robin Camille Davis has a nice post on using an iPhone, a ruler, and a couple of stacks of books as an impromptu digitization station. Miriam Posner shows us how to batch process photos. The University of Illinois Libraries maintain a really useful guide on digital tools for archival research. Building on these foundations, I am going to describe how you can generate plain text data from images of archival documents. With plain text data in hand you’ll be well provisioned for engaging a number of different Digital Humanities methods. The focus here is on extracting plain text data from images of print-based archival content using optical character recognition (OCR). The result likely won’t be highly accurate, but I’d argue that “just good enough” is all you need to begin exploring your sources.

What You Need
– MacPorts – a package management system; basically makes it easier for you to install software
– Tesseract – open source software used to perform optical character recognition
– Scanner / Camera – to capture images in TIFF format

Installing MacPorts
– Download MacPorts for your version of OS X
– Double-click the .pkg file and follow the instructions

Installing Tesseract (may or may not require installation of the Xcode Command Line Tools)
– Open Terminal
– Enter the following command in Terminal: sudo port install tesseract-eng

This installation supports English language materials. To work with other languages, see the documentation on additional language support.
Scanner / Camera Selection

Most scanners and copiers will provide an option to output to the TIFF format. If you are in an archive with one of those fancy overhead scanners with a slot for a USB drive, chances are TIFF is an option. Your life is charmed. Camera selection is a bit trickier. Most DSLRs have a TIFF option. If you can’t spring for a DSLR, some sleuthing seems to indicate that it’s possible to purchase phone apps that allow capture of images in TIFF format – fair warning – I haven’t tried the app.

Use Case – Generate Plain Text Data from a File using OCR

You visit an archive. A document of interest appears. It isn’t available anywhere else. You want to (1) capture an article from it and (2) generate plain text data for potential use in a Digital Humanities project.
1. Scan or photograph the document, output to TIFF format, crop if needed
2. Open Terminal
3. In Terminal, navigate to the folder where the file is located: e.g. cd Desktop/forum
4. Once in the folder, enter the following command in Terminal: tesseract yourfilename.tiff yourfilename (Tesseract will generate a plain text file derived from the TIFF)
5. Compare the original image to the plain text file – not likely to be a perfect match, but probably just good enough

Use Case 2 – Generate Plain Text Data from Multiple Files using OCR

You visit an archive. Documents of interest appear. They aren’t available anywhere else. You want to (1) capture multiple documents and (2) generate plain text data for potential use in a Digital Humanities project.
1. Scan or photograph the documents, output to TIFF format, crop if needed, and create a folder to store these files
2. Open Terminal
3. In Terminal, navigate to the folder where the files are located: e.g. cd Desktop/forum/1928_Forum
4. Once in the folder, enter the following command in Terminal: for i in *.tiff; do tesseract "$i" "yourfoldername_$i"; done (quoting "$i" keeps filenames with spaces from breaking the loop)
5. Review the original images and plain text files.

There you go. “Just good enough” OCR for plain text data generation from archival resources.
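If you would rather preview a batch run before unleashing it, the same loop can be expressed as a small script that builds the tesseract command lines without executing them. The function name, folder, and prefix below are hypothetical, used only for illustration:

```python
from pathlib import Path

def build_ocr_commands(folder, prefix):
    """Build the tesseract invocation for every .tiff in `folder`.

    Returns argument lists (suitable for subprocess.run) rather than
    executing anything, so the plan can be inspected first.
    """
    commands = []
    for tiff in sorted(Path(folder).glob("*.tiff")):
        out_base = f"{prefix}_{tiff.stem}"  # tesseract appends .txt itself
        commands.append(["tesseract", str(tiff), out_base])
    return commands

# e.g. build_ocr_commands("Desktop/forum/1928_Forum", "1928_Forum")
```

Each returned list can then be run with subprocess.run(cmd, check=True) once the plan looks right.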
This is a low-barrier approach that will generate data that is up to the task of enabling digitally inflected exploration of sources. Of course “just good enough” is a moving target. As your research project progresses and you need higher degrees of accuracy in the OCR, consider the resources below.
What formatter/syntax highlighter do you use to work with EJS (Embedded JavaScript) templates? For my project I use EJS (http://ejs.co/) as the Express templating engine. It is good and easy to use, but editing the files is a bit of a problem - syntax highlighting is worse than average and I could not find any tools to auto-format the files. My main IDE is VS Code, and I have also tried Atom. For VS Code I used QassimFarid.ejs-language-support, which has way more installs than all the others, and I found it lacking. I tried DigitalBrainstem.javascript-ejs-support, but it gets confused by my code pretty quickly too. For Atom I go with language-ejs (atom.io/packages/language-ejs), which is very good. Sadly, neither VS Code nor Atom has any auto-format capabilities (read: packages) for EJS, and pure JS formatters are confused by the EJS markup. What do you use to format your EJS templates? P.S. I do not mind trying some other IDEs/editors or even some standalone formatters.

Your question is quite subjective, so I can't really offer a definitive answer. But I get the feeling you're just using the base text editor rather than installing packages. Both VS Code and Atom come with package systems. In VS Code you can click on the "Extensions" icon in the left-hand panel (under the debug item). This will let you search for extensions that offer things such as auto-complete and syntax highlighting. Just by typing in "ejs" there are 4 different syntax highlighters and evaluators. Atom also has a similar system to search for and install packages. From here you should be able to find one which suits your needs for EJS.

Hallo Elliot! Thanks for answering. For VS Code I used QassimFarid.ejs-language-support, which has way more installs than all the others. I found it lacking. I tried DigitalBrainstem.javascript-ejs-support, but it gets confused by my code pretty quickly too. They are both nice, though, for simple templates. And neither of them provides an option to format the file.
Failing to find the right extensions for my favorite IDEs, I was hoping that there are some other editors that have them. For Atom I go with language-ejs (https://atom.io/packages/language-ejs), which is very good but does not have any formatter. And even the best package, atom-beautify, does not work as well as I hoped (as it does for pure JS files, for example).

Based on all of that, I think you should update your question to contain those requirements. At the moment you've just asked for a very subjective answer; you'd be better off asking for editors (or editor packages) which meet all of those needs.
Sing for me, my angel of music! I should note that Ubuntu Studio didn't officially publish and test the Alpha 1 ISO image. But that isn't to say there wasn't anything going on :) Even without a published ISO image, pre-alpha 1 did see some work. We fixed some issues, including:
- adding 'indicator-sound-gtk2' to the seeds to give the sound indicator visibility again
- removing 'dssi-vst' from the seeds to allow the meta packages to build, due to a multi-arch library dependency issue
- work on defining scope and content for the new website
- resolving a "libavcodec-extra-53 conflicts with libavcodec53" conflict
- beginning testing on a -lowlatency kernel for the repositories

Currently I'm trying to get input from graphic artists and designers. By getting to understand what tasks graphic artists and designers want to accomplish, the applications they use, and the workflow they follow, Ubuntu Studio can provide much better support for them. I've already reached out to the Libre Graphics Magazine people, heathenx, and Richard Querin because I like and respect what these people do. I would love to have more input though! If you do any graphics tasks, please poke me with some information about what you do, what you use to do it, and your generalized workflow. I really want to provide better support for graphic artists and designers.

Work on the new website continues and probably will continue for several weeks. Kernel testing continues, although I am reminded that I need to send Steve an email. I have been working on a specification for new artwork for the Ubuntu Studio plymouth theme, lightdm theme, and desktop wallpaper. A major goal of this is to develop a coherent thematic presence across all these images and also to adjust the color tone for the new theme. Which leads into...

Finally, the last of the current progress is making preparations for transitioning to the Xubuntu theme settings, which should cover the majority of the remaining transition to XFCE.
Additional changes will be made, so it will not be the exact default Xubuntu settings, but it shall be very close. Some aspects of the settings transition will probably continue for several weeks. I imagine this will at least include the plymouth and lightdm greeter themes. When kernel testing is at a certain milestone, we should be pushing the -lowlatency kernel package to REVU so that it may be reviewed for entry into the repository. Hopefully we can get this into REVU before Alpha 3. And we also will be beginning preparations for transitioning Ubuntu Studio to a liveDVD image but I expect this will not happen until after Alpha 2. Exciting times building up to an awesome LTS release!
Because so many people (myself included) seem to have obtained ARCnet cards without manuals, this file contains a quick introduction to ARCnet hardware, some cabling tips, and a listing of all jumper settings I can find. Please e-mail me with any settings for your particular card, or any other information you have! ARCnet is a network type which works in a way similar to popular Ethernet networks but which is also different in some very important ways. First of all, you can get ARCnet cards in at least two speeds: 2.5 Mbps (slower than Ethernet) and 100 Mbps (faster than normal Ethernet). In fact, there are others as well, but these are less common. The different hardware types, as far as I'm aware, are not compatible and so you cannot wire a 100 Mbps card to a 2.5 Mbps card, and so on. From what I hear, my driver does work with 100 Mbps cards, but I haven't been able to verify this myself, since I only have the 2.5 Mbps variety. It is probably not going to saturate your 100 Mbps card. Stop complaining. :) You also cannot connect an ARCnet card to any kind of Ethernet card and expect it to work. There are two "types" of ARCnet - STAR topology and BUS topology. This refers to how the cards are meant to be wired together. According to most available documentation, you can only connect STAR cards to STAR cards and BUS cards to BUS cards. That makes sense, right? Well, it's not quite true; see below under "Cabling." Once you get past these little stumbling blocks, ARCnet is actually quite a well-designed standard. It uses something called "modified token passing" which makes it completely incompatible with so-called "Token Ring" cards, but which makes transfers much more reliable than Ethernet does. In fact, ARCnet will guarantee that a packet arrives safely at the destination, and even if it can't possibly be delivered properly (ie. because of a cable break, or because the destination computer does not exist) it will at least tell the sender about it. 
Because of the carefully defined action of the "token", it will always make a pass around the "ring" within a maximum length of time. This makes it useful for realtime networks. In addition, all known ARCnet cards have an (almost) identical programming interface. This means that with one ARCnet driver you can support any card, whereas with Ethernet each manufacturer uses what is sometimes a completely different programming interface, leading to a lot of different, sometimes very similar, Ethernet drivers. Of course, always using the same programming interface also means that when high-performance hardware facilities like PCI bus mastering DMA appear, it's hard to take advantage of them. Let's not go into that. One thing that makes ARCnet cards difficult to program for, however, is the limit on their packet sizes; standard ARCnet can only send packets that are up to 508 bytes in length. This is smaller than the Internet "bare minimum" of 576 bytes, let alone the Ethernet MTU of 1500. To compensate, an extra level of encapsulation is defined by RFC1201, which I call "packet splitting," that allows "virtual packets" to grow as large as 64K each, although they are generally kept down to the Ethernet-style 1500 bytes. For more information on the advantages and disadvantages (mostly the advantages) of ARCnet networks, you might try the "ARCnet Trade Association" WWW page: http://www.arcnet.com
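The RFC1201 "packet splitting" idea described above can be illustrated with a small sketch. This is not the RFC's actual framing (the real fragment header layout and limits differ; the 4-byte overhead below is an invented placeholder); it only shows how a large virtual packet is cut into ARCnet-sized pieces and reassembled on the other side:

```python
MAX_ARCNET_PAYLOAD = 508   # the per-packet limit mentioned above
FRAG_OVERHEAD = 4          # illustrative per-fragment header, not RFC1201's real layout
CHUNK = MAX_ARCNET_PAYLOAD - FRAG_OVERHEAD

def split_packet(data: bytes):
    """Sender side: split a large 'virtual packet' into ARCnet-sized fragments."""
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def reassemble(fragments):
    """Receiver side: concatenate the fragments back into the virtual packet."""
    return b"".join(fragments)

msg = bytes(1500)                  # a typical Ethernet-sized payload
frags = split_packet(msg)
assert all(len(f) <= CHUNK for f in frags)
assert reassemble(frags) == msg
print(len(frags))  # 3 fragments for a 1500-byte payload
```

The real protocol also numbers the fragments and flags the last one so the receiver can detect loss and reordering; the sketch assumes in-order, lossless delivery, which ARCnet's token passing largely provides.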
After my first experience with OpenOffice V3 macros had been a complete failure, I investigated a little further and made a second attempt: there I learned that the sample macros provided do work – but only for text documents, not for spreadsheets. I have still been working on trying to get one of my simple sample MS Excel macros to work with OpenOffice V3. Both applications – and if we include Lotus Symphony there would be three – support BASIC as a language for writing macros. The challenge lies in the different object models used by these applications: how to access services, and how to access documents and parts of a document, like sheets or cells in a spreadsheet.

Let me explain first what my MS Excel macro is supposed to do. Basically, it is supposed to convert something like what is shown here on the left side into something like what is shown here on the right side. Imagine you receive a spreadsheet in which some columns are used for category or grouping values, and to make it more readable the value is only present when it changes. This is nice, unless you want to sort this spreadsheet by the first column, which would create incorrect output, or unless you would like to use the spreadsheet as input for a database import, which would cause problems due to the missing values in the first column.
Here is how I solved this in MS Excel by writing this VBA macro:

Sub CompleteCategoryRow()
    Dim c As Long
    Dim r As Long
    Dim contents As String
    If Selection.Rows.Count <= 1 Then
        MsgBox "Nothing selected.", vbOKOnly
    End If
    contents = " "
    For c = 1 To Selection.Columns.Count
        For r = 1 To Selection.Rows.Count
            If Selection.Cells(r, c) <> "" And Selection.Cells(r, c) <> " " Then
                contents = Selection.Cells(r, c)
            Else
                Selection.Cells(r, c) = contents
            End If
        Next r
    Next c
End Sub

A little side comment: to get even better readability we could add the following line right after the Else statement: Selection.Cells(r, c).Font.Color = RGB(255, 255, 255). This would still add the contents but display it in white, so as long as your background color is white you wouldn't see it.

Trying to run my MS Excel macro in OpenOffice V3 simply generated an error, as shown below. Since there is no easy way to convert those macros from MS Excel to OpenOffice, I noticed this subject was going to become more complicated, and I started to use a different approach to solving the problem: reading. Reading books is supposed to make you smart, isn't it? Somewhere I found this book being recommended: "Learn OpenOffice.org Spreadsheet Macro Programming: OOoBasic and Calc Automation: A Fast and Friendly Tutorial to Writing Macros and Spreadsheet Applications" by Mark Alexander Bain. Chapter 1 of the book describes how to work with the IDE of OpenOffice, chapter 2 how macros are organized – in libraries, modules, subroutines and functions – and chapter 3 is about the OpenOffice object model. I found this chapter quite hard to digest. It introduces UNOs – OpenOffice.org's Universal Network Objects – and then it is about clients, interfaces, methods and services, things I actually did not really want to read about before being enabled to come up with my first OpenOffice macro.
In chapter 4 it gets more practical, and the first working macro is shown in the sub-chapter "Manipulating Spreadsheet Cells" – not a "Hello World" one, but one which opens up a blank spreadsheet and displays the current date and time in the first cell of the sheet. The follow-on chapters deal with subjects I might be interested in at a later time, like formatting sheets, working with databases, working with other documents, developing dialogs, and creating a complete application. Chapter 10 sounds interesting and is about using Excel VBA in OpenOffice, something for me to read in more detail soon. It mentions that there are plans for a future version of OpenOffice to contain VBA support (this was written before the book was published, so maybe this is V3 already?).

The sample macro from chapter 4 was a good start for me, but it did not resolve the major questions I had at that point, especially:
- how to work on the actual spreadsheet already opened,
- and how to access the range in that spreadsheet currently selected.

The book also contains a link to the OpenOffice API documentation site and some hints on how to use it. I googled through dozens of pages on the internet to find a way to access a selected range in the currently active sheet. I finally learned about the getCurrentSelection function in the XModel interface, but it turned out that this API documentation is insufficient: no examples of how to use it are provided, and there are not even any details about what type of object this function returns. After more googling I was almost ready to give up for now, until I got the idea to do what I sometimes do when there seems to be no information available about a particular program: debug it. The need to start debugging is an indicator that knowledge management has so far failed to provide the knowledge you need. Thus: time to find out on your own, and maybe contribute to the incomplete body of knowledge out there in this particular realm.
The OpenOffice IDE allows you to run a macro step by step and to add watch points. Using that feature I explored the object returned by the getCurrentSelection function and saw that it provides an attribute DataArray giving access to all the cells in a selected range. With that previously missing piece of knowledge, discovered through code debugging, I was finally able to come up with this OpenOffice V3 BASIC macro, which is supposed to do exactly the same thing as my MS Excel macro:

Sub CompleteCategoryRow
    xSelection = thisComponent.getCurrentSelection()
    If xSelection.Rows.Count <= 1 Then
        MsgBox "Nothing selected."
    End If
    contents = " "
    Data() = xSelection.getDataArray()
    For c = 1 To xSelection.Columns.Count
        For r = 1 To xSelection.Rows.Count
            If Data(r-1)(c-1) <> "" And Data(r-1)(c-1) <> " " Then
                contents = Data(r-1)(c-1)
            Else
                Data(r-1)(c-1) = contents
            End If
        Next r
    Next c
    xSelection.setDataArray(Data())
End Sub

And, Eureka!, it works! My first OpenOffice V3 spreadsheet macro is running, as you can see in this screencast!

Pure VBA code works as well in OpenOffice V3! Simply adding "Option VBASupport 1" to the original MS Excel VBA code does the trick, as I learned from chapter 10 of the book I mentioned above. Thus: OpenOffice V3 includes Microsoft VBA support! The code shown below works as well and does exactly the same thing as the code I showed above:

Option VBASupport 1

Sub CompleteCategoryRowVBA
    Dim c As Long
    Dim r As Long
    Dim contents As String
    If Selection.Rows.Count <= 1 Then
        MsgBox "Nothing selected.", vbOKOnly
    End If
    contents = " "
    For c = 1 To Selection.Columns.Count
        For r = 1 To Selection.Rows.Count
            If Selection.Cells(r, c) <> "" And Selection.Cells(r, c) <> " " Then
                contents = Selection.Cells(r, c)
            Else
                Selection.Cells(r, c) = contents
            End If
        Next r
    Next c
End Sub
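Neither macro dialect is easy to try outside an office suite, so here is the same fill-down idea as a hypothetical, self-contained Python sketch over a list-of-lists; the table values are invented for illustration:

```python
def fill_down(rows):
    """Fill blank cells with the most recent non-blank value above them,
    column by column -- the same effect as the macros above."""
    if not rows:
        return rows
    filled = [list(r) for r in rows]   # work on a copy, don't mutate the input
    for c in range(len(filled[0])):
        last = ""
        for r in range(len(filled)):
            if filled[r][c] not in ("", " "):
                last = filled[r][c]    # remember the category value
            else:
                filled[r][c] = last    # repeat it in the blank cell below
        # note: unlike the macros, `last` is reset at the start of each column
    return filled

table = [["A", "1"], ["", "2"], ["B", "3"], ["", "4"]]
print(fill_down(table))  # [['A', '1'], ['A', '2'], ['B', '3'], ['B', '4']]
```

The core of both macros is just this double loop with a carried "last seen value"; the rest of their code is office-suite plumbing for getting at the selected cells.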
[BUG] Fix DeltaTableBuilder metadata configuration

Describe the problem
Currently, when DeltaTableBuilder is given a property that does not start with the delta. prefix, that property is written to the table metadata in all lowercase.

Steps to reproduce

io.delta.tables.DeltaTable.create()
  .addColumn("bar", StringType)
  .property("dataSkippingNumIndexedCols", "33")
  .property("delta.deletedFileRetentionDuration", "interval 2 weeks")
  .tableName("my_table")
  .execute()

Observed results
This will correctly write delta.deletedFileRetentionDuration to the delta log metadata, but will incorrectly write dataskippingnumindexedcols to the delta log metadata, too.

Expected results
We only want properties that are prefixed with delta. to be written to the delta log, and we obviously want to preserve their case. Neither dataskippingnumindexedcols nor dataSkippingNumIndexedCols should be written to the metadata.

Implementation Requirement
In addition to fixing the above issue, we also need to be backwards compatible and able to read older tables with these invalid properties. As well, you need to include a flag that can be enabled to roll back to the previous behaviour. After you fix this issue, please update the test and test output table here: https://github.com/delta-io/delta/blob/master/core/src/test/scala/org/apache/spark/sql/delta/DeltaWriteConfigsSuite.scala#L316 Also, please add tests with the feature flag enabled/disabled, as well as reading an "older" table with these invalid delta properties (this should go into EvolvabilitySuite).

> We only want properties that are prefixed with delta. to be written to the delta log, and we obviously want to preserve their case.

As this API is for setting table properties rather than options, we should allow arbitrary table properties. What we should fix is making it preserve the case.

Do you think that preserving the case should be enabled by default, with an option to switch it off?
How about a flag that is mandatory if any non-delta property is set? One of the risks with the new behavior is that there are a lot of people who use notebooks and are not careful about the runtime they attach their notebook to. Or at least, we should document this as a breaking behavior somehow.

I'll take this one. @edmondo1984 @zpappa

Here is a unit test to show the bug:

test("delta table property") {
  for (setProp <- Seq("CREATE TABLE", "ALTER TABLE", "DeltaTableBuilder")) {
    withTempDir { dir =>
      val path = dir.getCanonicalPath()
      setProp match {
        case "CREATE TABLE" =>
          sql(s"CREATE TABLE delta.`$path`(id INT) " +
            s"USING delta TBLPROPERTIES('delta.appendOnly'='true', 'Foo'='Bar') ")
        case "ALTER TABLE" =>
          spark.range(1, 10).write.format("delta").save(path)
          sql(s"ALTER TABLE delta.`$path` " +
            s"SET TBLPROPERTIES('delta.appendOnly'='true', 'Foo'='Bar')")
        case "DeltaTableBuilder" =>
          DeltaTable.create()
            .addColumn("id", "int")
            .location(path)
            .property("delta.appendOnly", "true")
            .property("Foo", "Bar")
            .execute()
        case _ => assert(false, "unexpected")
      }
      val config = DeltaLog.forTable(spark, path).snapshot.metadata.configuration
      assert(
        config == Map("delta.appendOnly" -> "true", "Foo" -> "Bar"),
        s"$setProp's result is not correct: $config")
    }
  }
}

DeltaTableBuilder is the only API that writes Map("delta.appendOnly" -> "true", "foo" -> "Bar") (note the lowercased "foo"). The other two cases, CREATE TABLE and ALTER TABLE, write Map("delta.appendOnly" -> "true", "Foo" -> "Bar"). I would say this is a bug. However, since this is also a breaking change, we can add a flag such as deltaTableBuilder.forceLowerCase.enabled and turn it off by default. @edmondo1984 Are you interested in fixing this?

Yes, can you assign the issue to me @zsxwing?

@edmondo1984 done. Thanks for offering the help!
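The behaviour under discussion — preserving the case of non-delta. properties, with a flag to restore the old lowercasing for compatibility — can be sketched as plain logic. The function and flag names below are hypothetical illustrations, not Delta Lake's actual implementation:

```python
def normalize_properties(props, force_lower_case=False):
    """Prepare table properties for the transaction log.

    force_lower_case=True reproduces the old buggy behaviour
    (non-delta keys lowercased) for backwards compatibility.
    """
    out = {}
    for key, value in props.items():
        if not force_lower_case or key.lower().startswith("delta."):
            out[key] = value          # fixed behaviour: preserve case
        else:
            out[key.lower()] = value  # legacy behaviour for old tables
    return out

props = {"delta.appendOnly": "true", "dataSkippingNumIndexedCols": "33"}
print(normalize_properties(props))
# with the default (fixed) behaviour, 'dataSkippingNumIndexedCols' keeps its case
```

The reader side then has to accept both spellings when looking a property up, which is why the issue also asks for tests that read "older" tables containing the lowercased keys.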
R Programming for Data Sciences

R has emerged as a preferred programming language in a wide range of data-intensive disciplines (e.g., O'Reilly Media's 2014 Data Science Salary Survey found that R is the most popular programming language among data scientists). The goal of this course is to teach applied and theoretical aspects of R programming for data sciences. Topics will cover generic programming language concepts as they are implemented in high-level languages such as R. Course content focuses on the design and implementation of R programs to meet routine and specialized data manipulation/management and analysis objectives. Attention will also be given to mastering concepts and tools necessary for implementing reproducible research.

- An open source (and freely available for Windows, Mac OS X, and Linux) environment for statistical computing and graphics
- Full-featured programming language that can essentially do anything
- In particular, it is a scripting language (with similarities to Matlab and Python) that allows for reproducibility and automating tasks
- R is one of the highest-paid IT skills
- R is the most-used data science language after SQL
- R is used by 70% of data miners
- R is #15 of all programming languages
- R is growing faster than any other data science language
- R is the #1 Google search for Advanced Analytics software
- R has more than 2 million users worldwide
- R is used by statisticians, scientists, and social scientists, and has the widest statistical functionality of any software
- R users add functionality via packages all the time
- R can interact with other software, databases, the operating system, the web, etc.

View the proposed syllabus for FOR/STT 875. FOR/STT 875 is delivered entirely online through the course management system D2L.
It will be an active, project-based learning environment that focuses on:

- History and overview of R
- Installation and configuration of the R programming environment
- Basic language elements and data structures
- Data input/output
- Data storage formats
- Subsetting objects
- Control structures
- Scoping rules
- Loop functions
- Graphics and visualization
- Grammar of data manipulation (dplyr and related tools)
- Statistical simulation

FOR/STT 875: R Programming for Data Sciences is available for undergraduate, graduate, and lifelong education students. There are no prerequisite or co-requisite courses.

- Undergraduates have two ways they can enroll:
  - MSU students can enroll through the online Schedule of Courses starting in spring 2017. You must enroll in FOR 875; however, if you would like to take the course as STT 875 instead, you can contact your advisor to have the course allocation switched to STT.
  - If you are not an MSU student and would like to take FOR/STT 875, you can apply to MSU as a Lifelong Education student. This is a free, one-page application and does not require essays or additional materials. Learn more about MSU Lifelong Education at the Office of the Registrar Lifelong Education page.
- Both Lifelong undergraduates and graduates can enroll in the course. Because FOR/STT 875 is a graduate class, Lifelong undergraduates must contact the MSU Forestry undergraduate advisor, ForAdvis@msu.edu, to request an override.
import cv2
import numpy as np
from PIL import Image
from pathlib import Path
import itertools


def otsu_thresh(mask, kernel_size=(3, 3)):
    # Blur, then apply Otsu's automatic threshold to produce a binary mask
    mask_blur = cv2.GaussianBlur(mask, kernel_size, 0).astype('uint8')
    _, mask_th3 = cv2.threshold(mask_blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask_th3


######################################################
############# VEGETATION INDICES #####################
######################################################

def make_exg(rgb_img, exg_thresh=False):
    # rgb_img: np array in [R, G, B] channel order
    # returns: ExG single-band vegetation index as an np array
    # ExG = 2 * G - R - B
    img = rgb_img.astype(float)
    red = img[:, :, 0]
    green = img[:, :, 1]
    blue = img[:, :, 2]
    exg = 2 * green - red - blue
    if exg_thresh:
        # Thresholding clips low negative values to zero
        exg = np.where(exg < 0, 0, exg).astype('uint8')
    return exg


######################################################
############ MORPHOLOGICAL OPERATIONS ################
######################################################

def filter_by_component_size(mask: np.ndarray, top_n: int) -> 'list[np.ndarray]':
    # Calculates the size of each connected component and keeps the top_n largest
    nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Sizes of components, excluding label 0 (background)
    sizes = stats[1:, -1]
    nb_components = nb_components - 1
    # Determine the minimum size that admits the top_n largest components
    top_n_sizes = sorted(sizes, reverse=True)[:top_n]
    min_size = min(top_n_sizes) - 1 if top_n_sizes else 0
    list_filtered_masks = []
    for i in range(nb_components):
        if sizes[i] >= min_size:
            filtered_mask = np.zeros(output.shape)
            filtered_mask[output == i + 1] = 255
            list_filtered_masks.append(filtered_mask)
    return list_filtered_masks


##########################################################
################### EXTRACT FOREGROUND ###################
##########################################################

def create_foreground(img, mask, add_padding=False, crop_to_content=True):
    # Applies mask to create an RGBA foreground using PIL
    if len(np.array(mask).shape) == 3:
        mask = np.asarray(mask)[:, :, 0]
    else:
        mask = np.asarray(mask)
    rgba = cv2.cvtColor(img, cv2.COLOR_RGB2RGBA)
    # Zero out the alpha channel wherever the mask is empty
    rgba[:, :, 3][mask == 0] = 0
    foreground = Image.fromarray(rgba)
    if add_padding:
        # Crop foreground to content with a 3-pixel margin on every side
        left, upper, right, lower = foreground.getbbox()
        pil_crop_frground = foreground.crop((left - 3, upper - 3, right + 3, lower + 3))
    elif crop_to_content:
        pil_crop_frground = foreground.crop(foreground.getbbox())
    else:
        pil_crop_frground = foreground
    return pil_crop_frground
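The ExG arithmetic can be sanity-checked in isolation. This standalone sketch re-implements just the formula (no cv2 needed) on a made-up one-pixel image, so the numbers are easy to verify by hand:

```python
import numpy as np

def exg_index(rgb_img):
    # ExG = 2*G - R - B, computed per pixel in float
    img = rgb_img.astype(float)
    return 2 * img[:, :, 1] - img[:, :, 0] - img[:, :, 2]

# One pixel: R=10, G=200, B=30 -> ExG = 2*200 - 10 - 30 = 360
img = np.array([[[10, 200, 30]]], dtype=np.uint8)
exg = exg_index(img)
```

A strongly green pixel yields a large positive index, which is why negative values can safely be clipped to zero when thresholding for vegetation.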
Here is a step-by-step process to configure Webmail in Gmail. You will learn how to configure Webmail in Gmail in simple steps. Click here to read an in-depth article about how to configure Webmail in Gmail: https://www.milesweb.in/hosting-faqs/configure-webmail-in-gmail/ To know more about web hosting, visit our website: https://www.milesweb.com Follow us on: Instagram – https://www.instagram.com/milesweb #Gmail #Webmail #WebHosting
Regular expression to extract numbers after a specific semicolon

Can you help me write an expression for the following string: Multi 3620 IDS; 17120846;9;12.04.2018 14:09:02;8,531;;pH;24,1;°C;Temp;AR;60%;;SenTix 940; C171412055; How do I get just the number 8,531 using a regular expression? Is there any rule to extract the numbers after a specific number of semicolons? Thank you

Which language? Need to know if capture groups are required or if you can use variable-length lookbehinds or \K. Also, the number of semicolons: does this carry over from the first line or is it just from the second line? What have you tried? Where did you get stuck?

I am using Python.

Then add a Python tag as well.

Is there a difference between every programming language in the context of the regular expression?

@mahmood227 yes. PCRE allows \K, so you could rewrite the regex I posted in my answer as ^(?:[^;]*;){4}\K[^;]+ and the answer is the match rather than the capture group. In .NET (i.e. C#) you can write it as (?<=^(?:[^;]*;){4})[^;]+ and the result is again the match instead of a captured value. In programming languages that don't allow these two (the \K reset token or the (?<=x*) variable-width lookbehind) you'd have to use a capture group as I presented: (?:[^;]*;){4}([^;]+). Also, some languages require ^ to ensure matching starts at the beginning, but Python's match() does this by default.
No regex solution Personally, I wouldn't even use regex for this: See code in use here import re s = "Multi 3620 IDS; 17120846;9;12.04.2018 \n14:09:02;8,531;;pH;24,1;°C;Temp;AR;60%;;SenTix 940; C171412055;" print(s.split(";")[4]) Regex solution But if you must use regex (for some unknown reason) you can use the following See regex in use here (?:[^;]*;){4}([^;]+) (?:[^;]*;){4} Match the following exactly 4 times [^;]* Match any character except ; any number of times ; Match this literally ([^;]+) Capture any character except ; one or more times into capture group 1 See code in use here import re s = "Multi 3620 IDS; 17120846;9;12.04.2018 \n14:09:02;8,531;;pH;24,1;°C;Temp;AR;60%;;SenTix 940; C171412055;" r = re.compile("(?:[^;]*;){4}([^;]+)") m = r.match(s) if m: print(m.group(1))
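As a quick sanity check (not part of the original answer), the split-based and regex-based approaches can be confirmed to agree on the sample string from the question; note that `match()` is implicitly anchored at the start of the string, so no leading `^` is needed:

```python
import re

s = ("Multi 3620 IDS; 17120846;9;12.04.2018 \n"
     "14:09:02;8,531;;pH;24,1;°C;Temp;AR;60%;;SenTix 940; C171412055;")

# Skip four ;-terminated fields, then capture the fifth
pattern = re.compile(r"(?:[^;]*;){4}([^;]+)")

regex_result = pattern.match(s).group(1)
split_result = s.split(";")[4]
# both yield "8,531"
```

The character class `[^;]` also matches the embedded newline, which is why the date/time field spanning two lines still counts as a single `;`-delimited field.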
Click HERE to join the GimpTalk chat! This opens directly in your browser using the mibbit client. If you don't see any people on, click the button named "Idlers" on the top right. Even if people don't talk to you right away, say Hi and hang around - many are online in the chat all day and might not notice you for a while. Please try to use the same nickname as you use on GT! The GIMPtalk chat channel is a place to discuss and get to know your fellow GIMP:ers in a more interactive and fast way. This IRC channel has recently relocated onto the GIMPnet network (irc.gimp.org), and is called #gimptalk (if you visited the chat earlier, on Dalnet, you need to change your bookmarks). If you already know IRC, that's all you need to know. If you do not want to -- or cannot -- use the web-based mibbit chat from the huge link above, you can also use a stand-alone IRC client. These tend to be faster and have more features. Look at the end of this post for a host of IRC clients you can try.

Some useful IRC commands

IRC is the mother of all chat programs, and there are plenty of powerful options available to you. All are accessed with commands beginning with /. Many clients have nice buttons to do these things for you without writing any commands too. Here is a list of commands you can use. Remember also that to find us, you MUST go to the GIMPnet server network. If you go to any other network and join the channel, you will either end up in another #gimptalk channel if it exists and happens to have that name, or you will create a new, empty channel named #gimptalk with only you in it. So make sure you log onto GIMPnet, nothing else. If the link above didn't work for you, once you are connected to GIMPnet (irc.gimp.org) with any client, give the command /join #gimptalk to enter chat.

- /join #channelName - join an IRC channel (example: /join #gimptalk, not needed if you click the link above)
- /list - lists all channels on the GIMPnet network. #gimptalk is one of those.
Also the official GIMP IRC channels are on this network, but don't disturb the devs unless you really have reason (it can be fun to just listen in though).

- /me - refer to yourself. So if I would write /me grins, others see *Griatch grins*.
- /away MyReason - sets you to away and allows you to give a reason why. People doing /whois on you will see this message.
- /whois NickName - gives info on another user in the channel. This is also how you check messages set with /away.
- /msg NickName - open a private chat with a user (often you can do this from a button too).
- /help - get a list of all IRC commands.

There is a bot in chat called "GIMPtalk-bot" (there is also another called "Jester"). This looks like a normal user but is actually a program that manages channel operations and some fun stuff for us. Whereas normal IRC commands always start with a /, you talk to the bot and give it commands by starting your line with @. Good bot commands to know are:

- /msg GIMPtalk-bot - talk in private to the bot (needed when giving passwords, also good for not spamming the common channel).
- @seen NickName - gives the last time a person with the nick NickName was active in the chat.
- @register MyNick MyPassword - register your nick with the bot, so you can be the only one to use it. (You have to talk privately to the bot.)
- @identify MyNick MyPassword - log in on the bot, if you had already registered (you have to talk privately to the bot).
- @list - lists all modules available to the bot.
- @list ModuleName - list commands in a module (example: @list Games).
- @help CommandName - get help on a command.
- @help ModuleName CommandName - get help on a command in a specific module.
- @praise NickName - hand out a random praise to NickName.
- @insult NickName - hand out a random insult to NickName.
- @roulette - play a round of Russian roulette. Be careful though; if you are unlucky you will be killed ...

IRC with Firefox web browser (any operating system?)
If you use Firefox, you can download the extension Chatzilla from the official Firefox extension pages (Tools -> Add-ons -> Get Extensions in Firefox). Once it's installed, open the extension options and choose a nick in the preferences menu (if you have a membership on GIMPtalk, we recommend using the same nick in IRC, for easier recognition). Next write /server irc.gimp.net, then /join #gimptalk. The Firefox Chatzilla IRC client (Linux version) IRC with Opera web browser (any operating system?) The Opera browser has an in-built IRC client from the start. Just click the link and you should be good to go. Windows stand-alone IRC clients Internet Explorer has no separate IRC client whatsoever, as far as I have discerned. Why this is, I don't know -- probably MS want to single out one of their own products instead, as usual. Anyway, if you're still using IE, you should ditch that and get Firefox anyway... If you really, really don't want to do that (because your uncle is Bill Gates and you'll lose your inheritance if you don't use IE), you need to get a stand-alone IRC program. Read on. The most popular stand-alone IRC client under Windows is mIRC, and I really don't understand why, because it not only costs money, it's also so ugly there's no end to it. It works, but I think you'll find it ugly and unattractive. The mIRC IRC client. Yes -- it really looks like this. Rather, I would recommend Xchat (this exists for Linux too). In Xchat, all you need to do is set your nick, find "GIMPnet" in the network list, connect to it and then write /join #gimptalk to get into the chat channel. Simple as pie. The Xchat IRC client (Linux version, Windows works the same). Tarkenfire did a great and easy tutorial on how to install the free stand-alone IRC client Trillian: you can find the step-by-step tutorial on Trillian here. There is a long list of Windows clients found here. Post below if you find any other good programs to use under Windows.
Linux stand-alone IRC clients If you use Linux, you'll almost certainly have a stand-alone IRC chat program like xIRC or kSIRC included already, even if you for some reason don't like Chatzilla. Xchat (see Windows clients) also exists for Linux. kSIRC, Linux only Mac stand-alone IRC clients I can't help you with Mac programs, but I bet it's similar there. There is a list of clients here though. Check it out and let me know if they're any good. X-chat Aqua, MacOSX HINTS from the thread, in case you have trouble connecting - Check your firewall settings - Try ports 6667, 6666, 6668, 6669 (in that order) This is an experiment to see if there is an interest in this kind of chat. You're welcome to drop by! Also, if you like to keep the channel alive and populated, try to bump this thread now and then, so other people see it and can join in.
Git-Archive Read Paths From a File or STDIN Instead of CLI Argument in Windows I need to create an archive of changed files only between two commits. I use this command: git archive -o changes.zip HEAD $(git diff --name-only --diff-filter=d a1be79f0 HEAD). It works fine for a small number of changes. But now I have added a new module in the repository which includes a large number of new files. In Windows, the command $(git diff --name-only --diff-filter=d a1be79f0 HEAD) results in a large number of file paths and hence exceeds the maximum single-command length. I think it would be fine if I write the output of the git diff command to a file and pass that file to git archive. But I cannot find a way to feed a large number of file paths to git archive. Why do you think it exceeds the command length? This may be an issue with the zip tool. There are a number of different versions of the zip specification with optional sections. Zip tools work well when you do not add additional files to an existing archive. Problems seem to occur after adding additional files to an existing zip. Not all tools add the additional files the same way, and then there are issues with reading the zip. This sounds like you cannot read all the files, not a length issue. @jdweng, I think so because of the error message: ResourceUnavailable: Program 'git.exe' failed to run: An error occurred trying to start process 'C:\Program Files\Git\cmd\git.exe' with working directory 'D:\PATH-TO-REPO'. The filename or extension is too long.At line:1 char:1 git archive -o changes.zip HEAD $(git diff --name-only a1be79f08 HEAD … Is the D:\ drive a mapped drive to a network location? The error is due to the network pathname being too long, not the files inside the zip archive. Move the zip to your C:\ drive and try again. @jdweng, D:\ is a local drive, nothing is network-shared. The error message does not appear if I run git archive -o changes.zip HEAD.
A zip file is an archive that is structured the same as a file system, with cylinders and sectors, directories, and subdirectories. There is a ZIP specification with different versions and optional features. I've seen lots of issues over the years when ZIPs are created by one utility and then unzipped with a different utility, especially when new files are added to an existing archive or archives are created over the network. Windows has folder/filename maximum path lengths (around 260 characters), which seems to be your error. I would try to unzip your file with the same tool that created the zip.
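Not from the thread, just a hedged sketch of one workaround: if the goal is simply to package the changed files as they currently exist in the working tree (note that `git archive` reads file contents from the commit, which this sketch does not replicate), a short script sidesteps the command-line length limit entirely by reading the paths file directly. The file name `files.txt` is an assumption, as produced by e.g. `git diff --name-only --diff-filter=d a1be79f0 HEAD > files.txt`:

```python
import zipfile
from pathlib import Path

def zip_paths(paths_file, out_zip):
    # Read one repo-relative path per line, skipping blanks
    paths = [p for p in Path(paths_file).read_text().splitlines() if p.strip()]
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in paths:
            zf.write(p)  # stores each entry under its repo-relative path
    return len(paths)
```

Run it from the repository root, e.g. `zip_paths("files.txt", "changes.zip")`; since the paths never appear on a command line, Windows' command-length limit no longer matters.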
If you’re building an app that requires visualizing large amounts of connected data, you may be considering using a JS graph library to help you visualize and interact with your data. A JS graph visualization library can help you effectively visualize and interact with your data, solving common business and technical challenges across industries and organizations. Read on to discover why a JS graph visualization library is the key to meeting your data visualization needs. 3 reasons to use a JS graph library for data visualization Derive key insights within your data In recent years, the amount of data generated has catapulted to around 2.5 quintillion bytes each day (1). With so much data at hand, making sense of it all poses a real challenge to organizations. Yet being able to use that data is essential to deriving real business insights and making more informed decisions as a result. By utilizing a JS graph library within your application, you’re able to create rich and customized graph data visualizations for your audience, and enable them to visually navigate data and zero in on the information that’s most relevant. Analyzing your data becomes much less of a daunting task. You’ll be able to unlock new insights that may have previously gone unnoticed. Building your application It’s no secret that building an application from scratch is a time-consuming process. Depending on the resources available and the expertise of developers, it can take anywhere from a few hundred to several thousand development hours. Testing and iterating on different ideas can also add to the time and effort needed to build an app that meets your expectations. With a JS graph library, you can cut down on development time and ease the workload required to develop and integrate network visualization capabilities into your application. JS libraries allow you to test new ideas and expand on existing ones by simply adapting the existing code to your needs. 
Easy to use The ability to make data exploration accessible to your end user is essential in adding value and improving the overall experience of your application. With a JS graph library, you're able to implement out-of-the-box functions and features that allow your users to more easily interact with their data at scale. Moreover, it's possible to enrich your visualizations with components that help end users communicate their findings. CAST: achieving greater software intelligence with graph visualization CAST provides products that generate software intelligence, with a technology based on semantic analysis of software source code and components. Their technology automatically "understands" custom-built software systems and provides insights into their inner workings with MRI-like precision. By leveraging graph technology, mapping different software dependencies becomes much easier and faster, making it simpler to gain real insights. It provides a powerful and flexible way to model, analyze, and visualize complex relationships between software components. In fact, one CAST client even noted that what once took 3-4 days to accomplish could now be done in just 3-4 hours. "We could not do what we do today without this kind of technology," says Olivier Bonsignour, Head of Product at CAST. "And what we can achieve with Ogma in terms of display and aesthetics of the solution is important to our overall success." Read the full CAST success story here. Next step: choosing a JS graph library If you're starting to dive deeper into your decision-making process, we've also put together a list of some of the most popular JS graph libraries to visualize your graph data. It's a good place to start for a high-level comparison of graph libraries. We also wrote a starter guide for developers on JS graph libraries for network visualization. You can also check out Ogma, the all-in-one JS graph visualization library from Linkurious for large-scale, interactive graph visualization.
using System;

using NFX;
using NFX.IO;
using NFX.ApplicationModel;
using NFX.Environment;

namespace ConsoleSkeleton
{
  /// <summary>
  /// This is a skeleton of a typical NFX console application (CLI).
  /// It demonstrates:
  ///   a. Setting up an application container
  ///   b. Writing highlighted content: help etc.
  ///   c. Using command-line arguments
  ///   d. Injecting a dependency - inject a logic module from the command line with configuration
  ///   e. Handling errors
  /// </summary>
  /// <example>
  /// ConsoleSkeleton -logic type="ConsoleSkeleton.CountLogic, ConsoleSkeleton" from=5 to=15
  /// ConsoleSkeleton -logic type="ConsoleSkeleton.SaySomethingLogic, ConsoleSkeleton" what-to-say="Yes, This is my yellow message" primary-color=yellow
  /// </example>
  class Program
  {
    static int Main(string[] args)
    {
      try
      {
        // Create a singleton application container context, and Dispose it with `using`
        using (var app = new ServiceBaseApplication(args, null)) // explicit rootConfig = null - will search the co-located file
          run(app);

        if (System.Diagnostics.Debugger.IsAttached)
        {
          Console.WriteLine("Strike a key...");
          Console.ReadKey();
        }

        return 0;
      }
      catch (Exception error)
      {
        ConsoleUtils.Error("Program leaked: {0}\n Trace: {1}".Args(error.ToMessageWithType(), error.StackTrace));
        return -1;
      }
    }

    private static void run(ServiceBaseApplication app)
    {
      // The command-line arguments are already parsed into a configuration object
      // accessible via the `CommandArgs` property of the application context
      if (app.CommandArgs["?", "h", "help"].Exists)
      {
        // `GetText(path)` is an extension method that reads an embedded resource
        // relative to the specified type location
        ConsoleUtils.WriteMarkupContent(typeof(Program).GetText("Help.txt"));
        return;
      }

      var silent = app.CommandArgs["s", "silent"].Exists;
      if (!silent)
        ConsoleUtils.WriteMarkupContent(typeof(Program).GetText("Welcome.txt"));

      if (!silent) Console.WriteLine();

      // Get the logic switch `-logic` from the command line as a config section.
      // Notice no `if` statements: if addressed nodes do not exist, sentinels are
      // returned instead (we could use `if (cfgLogic.Exists) ...`)
      var cfgLogic = app.CommandArgs["logic"];

      // Inject the logic from config (BY CONFIGURATION); use DefaultLogic if type is not specified (BY CONVENTION).
      // This will call Configure(cfgLogic) on the Logic instance after its creation
      var logic = FactoryUtils.MakeAndConfigure<Logic>(cfgLogic, typeof(DefaultLogic));

      // Execute the injected logic module
      logic.Execute();

      if (!silent) ConsoleUtils.Info("The End");
    }
  }
}
When you conduct site surveys, it is often convenient to control the application without using your hands. With TamoGraph, this can be achieved by using the operating system's voice recognition engine. You can actually "tell" the application what to do using simple voice commands; for example, "TamoGraph, pause" or "TamoGraph, pan left". To configure voice control, click Settings => Camera and Voice Settings in TamoGraph and select the Commands tab. Check the Enable voice recognition box to turn on this functionality and select a Voice input device, which might be a microphone integrated into your laptop/tablet or, better yet, headphones with a microphone; an external microphone typically provides a much higher voice recognition quality. The recognition engine control allows you to select a language to be used, e.g. "English – US" or "English – UK". After configuring the voice recognition parameters, use the Voice command test frame to test it and say one of the pre-set commands:

- Zoom in, Zoom out – zooms a floor plan in and out.
- Start – starts a survey.
- Pause – pauses a started survey.
- Resume – resumes a paused survey.
- Pan up, Pan down, Pan left, Pan right – pans a floor plan.
- Take photo – takes a photo.
- Voice off – turns off voice recognition.

Each command must be prefixed with the specified Command prefix. By default, the prefix is "TamoGraph", but you can change it to any other word. The reason why you need to use a prefix is the following: when voice recognition is enabled, the application is constantly listening for speech in the microphone. If you talk to someone while you are surveying and say, for example, "I need to take a photo", the voice recognition engine hears the "take a photo" part and performs the action associated with this command, although you never wanted it to. To prevent the actions from being triggered by such phrases, actual commands intended for execution should be prefixed by a special word (by default, "TamoGraph"). To test voice recognition, say "TamoGraph, zoom in" or "TamoGraph, take photo".
If the pronounced command is recognized, you will see and hear a confirmation message, e.g. "Zoom in ok". For some of the commands, you can customize the associated action. Zoom step and Pan step control the percentage of zoom and pan, respectively. Take photo after (sec.) controls the time interval after which the photo is automatically taken. Once you have configured and tested voice recognition, you can close the configuration dialog and use voice commands to control TamoGraph. If a command is not currently applicable (e.g., pausing a survey that was never started), you will be notified accordingly. Voice recognition can be turned on and off using the microphone icon on the right side of the status bar of the main TamoGraph window.
Taking a vacation might be one of the most effective ways to refresh your mind and escape for a while from your daily activities. There are so many places that you can choose as your vacation destinations. One of the best places that you can visit on your vacation is California. California is one of the states in the US. There are so many interesting places that you can visit in California, and you can do many interesting activities in this state as well. One of the best tourist attractions in California is the San Andreas Fault. If you plan to visit this fault, you might need to get a San Andreas Fault map so that you can reach this fault in an easier way. By using the map of the San Andreas Fault, you will also be able to explore this site in a safer and more enjoyable way as well. Buying the Map at a Tourist Center When you arrive in California and you want to go to the San Andreas Fault, it's very important for you to know how to get there. If you came to California for the first time, you might be confused about which route you should take if you want to go to specific sites. This will also happen when you want to go to the San Andreas Fault. That's why you need to get a San Andreas Fault map, so that you can get to the San Andreas Fault without having to deal with difficulty in finding the right route. The best place to get a map of the San Andreas Fault is at a tourist center. You can find this office easily at the airport or in other public places. The price of the map is very affordable, and the map also has excellent information as well. Get the Map from a Travel Agent If you came from outside California, you might use services from a travel agent to book your plane or train tickets. There are so many travel agents that provide plane or train tickets to California these days. Travel agents also sometimes provide additional services such as hotel booking or transport rental. Some travel agents might also provide a San Andreas Fault map for tourists who want to visit this site.
Some travel agents might give you this San Andreas Fault map for free while some others might require you to buy the map instead. when you book your airplane ticket on the travel agent, you can ask the travel agent whether they provide the map of San Andreas Fault or not. Use the Online Map Other great way to get map of San Andreas Fault is by using the online map. You can use application such as Google Maps to get the map of San Andreas Fault. These days, you also can use other applications that provided San Andreas Fault Map for your gadget. If you can’t find application that can provide maps of San Andreas Fault, you can use online map that is provided by several websites these days. These online mps will help you in explore the site in better way. Some websites even provide map that will guide you to San Andreas Fault.
Data analysts perform a variety of roles, including providing reports using statistical methods, analyzing data, implementing systems for data collection, developing databases, identifying trends, and interpreting complex data set patterns. SQL is the industry-standard language used by data analysts for providing data insights. Since SQL is a major component of data analysis, it features heavily in job interviews. These are some frequently asked SQL query interview questions for data analysts. Data Analyst interview questions and answers for freshers Consider the following tables 1. Write a query fetching the available projects from the Salary table. Looking at the Salary table, we can see that every employee has a project value associated with it. Duplicate values also exist, so the DISTINCT clause will be used in this case to get unique values. SELECT DISTINCT(project) FROM Salary; 2. Write a query fetching the full name and employee ID of workers operating under the manager with ID 986. Looking at the Employee table, we can fetch the details of employees working under the manager with ID 986 using a WHERE clause. SELECT employee_id, full_name FROM Employee WHERE manager_id=986; 3. Write a query to find the employee IDs with a salary ranging between 9000 and 15000. In this case, we will use a WHERE clause with the BETWEEN operator. SELECT employee_id, salary FROM Salary WHERE salary BETWEEN 9000 AND 15000; 4. Write a query for employees that reside in Delhi or work under the manager with ID 321. Here, only one of the conditions needs to be satisfied: either the worker operates under the manager with ID 321 or the worker resides in Delhi. In this scenario, we will use the OR operator. SELECT employee_id, city, manager_id FROM Employee WHERE manager_id='321' OR city='Delhi'; 5. Write a query displaying each employee's net salary, i.e., salary plus the variable component. Here we will use the + operator.
SELECT employee_id, salary + variable AS net_salary FROM Salary; 6. Write a query fetching the employee IDs available in both tables. We will make use of a subquery. SELECT employee_id FROM Employee WHERE employee_id IN (SELECT employee_id FROM Salary); 7. Write a query fetching the employee's first name (the string before the space) from the full_name column of the Employee table. First, we need to find the position of the space character in the full_name field, then extract the first name from it. We will use LOCATE in MySQL and CHARINDEX in SQL Server. The MID or SUBSTRING function will be used to take the string before the space. Via MID (MySQL): SELECT MID(full_name, 1, LOCATE(' ', full_name)) FROM Employee; Via SUBSTRING (SQL Server): SELECT SUBSTRING(full_name, 1, CHARINDEX(' ', full_name)) FROM Employee; 8. Write a query fetching the workers who work on projects other than P1. In this case, the NOT operator can be used for fetching rows that do not satisfy the stated condition. SELECT employee_id FROM Salary WHERE NOT project = 'P1'; Or, using the not-equal-to operator: SELECT employee_id FROM Salary WHERE project <> 'P1'; 9. Write a query fetching the names of employees whose salary is between 5000 and 10000, inclusive. Here, BETWEEN will be used in the WHERE clause to return the employee IDs of workers whose remuneration satisfies the condition, which is then used as a subquery to get the full names from the Employee table. SELECT full_name FROM Employee WHERE employee_id IN (SELECT employee_id FROM Salary WHERE salary BETWEEN 5000 AND 10000); 10. Write a query fetching the details of the employees who started working in 2020 from the Employee table. For this, we will use BETWEEN with the date range 01/01/2020 to 31/12/2020. SELECT * FROM Employee WHERE date_of_joining BETWEEN '2020/01/01' AND '2020/12/31'; Alternatively, the year can be extracted from date_of_joining using the YEAR function in MySQL. SELECT * FROM Employee WHERE YEAR(date_of_joining) = '2020'; 11.
Write a query fetching salary data and employee names. Display the details even if an employee's salary record isn't there. Here, the interviewer is trying to gauge your knowledge of SQL JOINs. A LEFT JOIN will be used here, with the Employee table on the left side of the Salary table. SELECT E.full_name, S.salary FROM Employee E LEFT JOIN Salary S ON E.employee_id = S.employee_id;
Advanced SQL, DBMS interview questions These SQL interview questions for 6 years of experience can help you in your job application. 12. Write a query removing duplicates from a table without using a temporary table. An INNER JOIN along with DELETE will be used here. Matching rows are compared for equality, and the rows with the higher employee ID are discarded. DELETE E1 FROM Employee E1 INNER JOIN Employee E2 WHERE E1.employee_id > E2.employee_id AND E1.full_name = E2.full_name AND E1.manager_id = E2.manager_id AND E1.date_of_joining = E2.date_of_joining AND E1.city = E2.city; 13. Write a query fetching just the even rows of the Salary table. If there's an auto-increment field, for instance employee_id, then the below-mentioned query can be used. SELECT * FROM Salary WHERE MOD(employee_id, 2) = 0; If such an auto-increment field is absent, these queries can be used instead, checking that the row number leaves a remainder of 0 when divided by 2. Using ROW_NUMBER (in SQL Server): SELECT E.employee_id, E.project, E.salary FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY employee_id) AS RowNumber FROM Salary) E WHERE E.RowNumber % 2 = 0; Using a user-defined variable in MySQL: SELECT * FROM (SELECT S.*, @rowNumber := @rowNumber + 1 AS RowNo FROM Salary S JOIN (SELECT @rowNumber := 0) r) t WHERE t.RowNo % 2 = 0; 14. Write a query fetching duplicate data from the Employee table without referring to employee_id (the primary key). In this case, we will GROUP BY all the fields, then use a HAVING clause to return the duplicate rows, i.e., those with a count greater than one.
SELECT full_name, manager_id, date_of_joining, city, COUNT(*) FROM Employee GROUP BY full_name, manager_id, date_of_joining, city HAVING COUNT(*) > 1; 15. Write a query creating an empty table with the same structure as another table. Here, an always-false WHERE condition will be used. CREATE TABLE NewTable AS SELECT * FROM Salary WHERE 1 = 0; The above are some of the most common SQL data analyst interview questions to prepare for entry-level, intermediate, and advanced jobs.
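The "Consider the following tables" reference near the top evidently pointed at table images that did not survive extraction. A plausible schema, inferred purely from the columns the queries touch (column names come from the queries; the types and sizes are assumptions), would be:

```sql
-- Hypothetical reconstruction of the two interview tables;
-- column names are taken from the queries above, types are guesses.
CREATE TABLE Employee (
    employee_id     INT PRIMARY KEY,
    full_name       VARCHAR(100),
    manager_id      INT,
    date_of_joining DATE,
    city            VARCHAR(50)
);

CREATE TABLE Salary (
    employee_id INT,            -- matches Employee.employee_id
    project     VARCHAR(10),    -- e.g. 'P1'
    salary      DECIMAL(10,2),
    variable    DECIMAL(10,2)   -- the variable pay component used in question 5
);
```

With a schema like this in front of you, each query above can be checked against a throwaway database before the interview.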
linux-raspberrypi-1_6.6.22+git-r0 do_fetch: Failed to fetch URL git://github.com/raspberrypi/linux.git;name=machine;branch=rpi-6.6.y;protocol=https, attempting MIRRORS if available Hi, I am trying to build a yocto image for raspberry pi3, following the instructions in the README file of the meta-raspberrypi layer. source poky/oe-init-build-env rpi-build Add this layer to bblayers.conf and the dependencies above Set MACHINE in local.conf to one of the supported boards bitbake core-image-base Build Configuration: BB_VERSION = "2.9.0" BUILD_SYS = "x86_64-linux" NATIVELSBSTRING = "ubuntu-22.04" TARGET_SYS = "arm-poky-linux-gnueabi" MACHINE = "raspberrypi3" DISTRO = "poky" DISTRO_VERSION = "5.0+snapshot-7eebbdb6c45149e8bf7abbf6a9f0c8068f7668f7" TUNE_FEATURES = "arm vfp cortexa7 neon vfpv4 thumb callconvention-hard" TARGET_FPU = "hard" meta meta-poky meta-yocto-bsp = "master:7eebbdb6c45149e8bf7abbf6a9f0c8068f7668f7" meta-raspberrypi = "master:1879cb831f4ea3e532cb5ce9fa0f32be917e8fa3" Description ERROR: linux-raspberrypi-1_6.6.22+git-r0 do_fetch: Bitbake Fetcher Error: FetchError('Unable to fetch URL from any source.', 'git://github.com/raspberrypi/linux .git;name=machine;branch=rpi-6.6.y;protocol=https') ERROR: Task (/home/xx/source/meta-raspberrypi/recipes-kernel/linux/linux-raspberrypi_6.6.bb:do_fetch) failed with exit code '1' Regards, Waqar Ahmed @VVAQARAHMED it works fine here, so I would suggest that you try git clone https://github.com/raspberrypi/linux.git locally on this machine to ensure that your machine is able to access and connect to this repository URI. In logs I see an unwanted space in SRC_URI 'git://github.com/raspberrypi/linux .git' just before .git; I wonder if it's a typo during pasting here or you see it in build logs too. I tried to clone git clone git://github.com/raspberrypi/linux.git Cloning into 'linux'...
fatal: unable to connect to github.com: github.com[0: <IP_ADDRESS>]: errno=Connection timed out Ping response for ping github.com PING github.com (<IP_ADDRESS>) 56(84) bytes of data. 64 bytes from lb-140-82-121-4-fra.github.com (<IP_ADDRESS>): icmp_seq=1 ttl=51 time=35.9 ms 64 bytes from lb-140-82-121-4-fra.github.com (<IP_ADDRESS>): icmp_seq=2 ttl=51 time=31.6 ms 64 bytes from lb-140-82-121-4-fra.github.com (<IP_ADDRESS>): icmp_seq=3 ttl=51 time=29.2 ms 64 bytes from lb-140-82-121-4-fra.github.com (<IP_ADDRESS>): icmp_seq=4 ttl=51 time=34.2 ms 64 bytes from lb-140-82-121-4-fra.github.com (<IP_ADDRESS>): icmp_seq=5 ttl=51 time=26.9 ms I tried to clone locally on my PC and it cloned successfully: git clone https://github.com/raspberrypi/linux.git However, when I build bitbake core-image-base, I get the same error; it appears at 74%. WARNING: linux-raspberrypi-1_6.6.22+git-r0 do_fetch: Failed to fetch URL git://github.com/raspberrypi/linux.git;name=machine;branch=rpi-6.6.y;protocol=https, attempting MIRRORS if available ERROR: linux-raspberrypi-1_6.6.22+git-r0 do_fetch: Fetcher failure: Fetch command export PSEUDO_DISABLED=1; export DBUS_SESSION_BUS_ADDRESS="unix:path=/run/user/1001/bus"; export
PATH="/home/wahmed/Yocto-training/rpi-build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/wahmed/Yocto-training/source/poky/scripts:/home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi:/home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/recipe-sysroot/usr/bin/crossscripts:/home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/recipe-sysroot-native/usr/sbin:/home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/recipe-sysroot-native/usr/bin:/home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/recipe-sysroot-native/sbin:/home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/recipe-sysroot-native/bin:/home/wahmed/Yocto-training/source/poky/bitbake/bin:/home/wahmed/Yocto-training/rpi-build/tmp/hosttools"; export HOME="/home/wahmed"; LANG=C git -c gc.autoDetach=false -c core.pager=cat -c safe.bareRepository=all clone --bare --mirror https://github.com/raspberrypi/linux.git /home/wahmed/Yocto-training/rpi-build/downloads/git2/github.com.raspberrypi.linux.git --progress failed with exit code 128, see logfile for output ERROR: linux-raspberrypi-1_6.6.22+git-r0 do_fetch: Bitbake Fetcher Error: FetchError('Unable to fetch URL from any source.', 'git://github.com/raspberrypi/linux.git;name=machine;branch=rpi-6.6.y;protocol=https') ERROR: Logfile of failure stored in: /home/wahmed/Yocto-training/rpi-build/tmp/work/raspberrypi3-poky-linux-gnueabi/linux-raspberrypi/6.6.22+git/temp/log.do_fetch.1416696 ERROR: Task (/home/wahmed/Yocto-training/source/meta-raspberrypi/recipes-kernel/linux/linux-raspberrypi_6.6.bb:do_fetch) failed with exit code '1' Hi, the issue seems to be due to the bitbake server failing sometimes.
After a lot of tries I am able to build the image successfully.
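For anyone landing here because the git:// protocol itself (TCP port 9418) is blocked on their network rather than because of a transient server failure, a common workaround is to let bitbake try an https source mirror first via PREMIRRORS in conf/local.conf. A sketch, with the mirror URL taken from the Yocto Project reference manual's examples (not from this thread):

```
# Sketch: try the Yocto Project https source mirror before any git:// upstream.
# Syntax shown is the newer (honister and later) space-separated pair form.
PREMIRRORS:prepend = "git://.*/.* https://downloads.yoctoproject.org/mirror/sources/ "
```

Note that meta-raspberrypi's SRC_URI already carries protocol=https, so in this particular thread the failure turned out to be transient rather than a protocol block.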
Executing a MySQL query without selecting a database in Java I want to execute a MySQL query without selecting a database in Java. I am able to do so directly in the MySQL console, like: CREATE TEMPORARY TABLE temptable AS (SELECT REPLACE(a.keywordSupplied, "www.", "") AS keywordSupplied FROM db1.table1 a); I am able to execute the above query directly after logging into the MySQL console. I don't make use of the use statement here: use databasename; I need to do this as I will be taking data from two different databases in a single query. According to http://www.tutorialspoint.com/jdbc/jdbc-create-database.htm I gave the DB URL as jdbc:mysql://localhost/ but it throws an Exception: java.sql.SQLException: No database selected. Screenshot for the same in the mysql console. Any solutions to make it work? Can you paste the url that you used? @almasshaikh. Here is the code snippet for the connection: Class.forName("com.mysql.jdbc.Driver"); conn1 = DriverManager.getConnection("jdbc:mysql://localhost/", USER, PASS); I believe the JDBC driver itself requires you to have selected a database (which in JDBC you should not do with use database, but with a call to setCatalog instead, or with explicitly specifying a database in the connection URL). @MarkRotteveel. "use databasename" is just to explain how it is done on the mysql console. I am able to query without selecting any database (which is done using the above quoted statement) in the mysql console. So I need to do the same using Java. And I said that very likely it is not possible, because the driver itself requires a selected database. "I am able to [execute a query without selecting a database] directly in the MySQL console ..." - I just tried that and it didn't work for me; I got "ERROR 1046 (3D000): No database selected". I suspect that when you connect via the console a database is being selected for you. Try this: Immediately after logging in to the MySQL console, run the command SELECT DATABASE();.
If it returns something other than NULL then you actually do have a database selected. @GordThompson. I have added the screenshot for the mysql console. Instead of DriverManager.getConnection("jdbc:mysql://localhost/", USER, PASS); use: DriverManager.getConnection("jdbc:mysql://localhost/db1", USER, PASS); // specify the db within the URL itself. Btw, don't you have a port in the url? That's the main issue. I don't want to select any database as I will be getting data from two databases. So I need to do the whole task without selecting a database. And I am able to query without selecting a database in the mysql console. I need to do the same thing in Java. And if I try your solution then I am getting the error "Unknown table temptable". @almasshaikh If the port is not specified in the URL, the default port will be used: 3306. In the tutorial you are following, they issue a CREATE DATABASE statement. This statement is one of the very few statements that you can issue outside of the context of a database (the only other one I can think of is CREATE USER). Even a TEMPORARY table must be attached to a database. You must choose an arbitrary database for this operation. You probably do not care which one in particular (just make sure you have write permissions on the one you select :) If you really do not want to provide a database at connection time, you can specify the target database in the SQL statement: CREATE TEMPORARY TABLE [database].temptable.
Try this code: static final String JDBC_DRIVER = "com.mysql.jdbc.Driver"; static final String DB_URL = "jdbc:mysql://localhost:3306/"; static final String USER = "root"; static final String PASS = ""; Class.forName(JDBC_DRIVER); System.out.println("Connecting to database..."); conn = DriverManager.getConnection(DB_URL, USER, PASS); System.out.println(conn); conn.setCatalog("mysql"); PreparedStatement stmt1 = conn.prepareStatement("CREATE TEMPORARY TABLE temptable (select test.a.id as id, students.a.id as sid from test.a join students.a on test.a.id = students.a.id)"); stmt1.execute(); stmt1 = conn.prepareStatement("select * from temptable"); ResultSet set1 = stmt1.executeQuery(); while(set1.next()){ System.out.println(set1.getString(1)); } Now you have the connection with your database server; execute your query. I hope your query will execute without any problem. I have done it already. When I try to execute any query it throws the exception 'java.sql.SQLException: No database selected'. Actually the problem is that you are creating a table, which always needs a database. So don't create a temp table, or create and select a temp database where your temp table will be created, and then execute your query. I have added more code and it is executing without any error. It is also displaying data with a select statement on the temp table. It seems that when you do "select * from test.demo;", it actually selects the 'test' database and does the query in the 'demo' table.
Why would I get 'limited connectivity' (i.e. no internet access) via a home network connected to an ADSL modem? When I power up my modem, home network and machines, sometimes the machines don't connect to the internet. Why would this be happening? Is there anything I can do to intervene? (See my setup below) The order I usually choose for power up is: Switch on Modem and Bridge1 (both on the same extension power block sockets). Modem boots. Not waiting for the modem to finish booting (all 5 lights green), I switch on Bridge2, and straight after that I power up all three machines. Once all are powered up I can then see if they are on the internet or not (if not, the yellow warning hazard triangle symbol shows over the connection symbol). Then I may power up my NAS (which I think is irrelevant in this problem but included for completeness). Details on hardware: Bridges are 4-port Zyxel 1Gb ethernet. Modem is a Thomson SpeedTouch 580 ADSL with 4 x 100Mb/s ports and WiFi. Occasionally I have an HTC Desire Z phone and iPod touch connect to it, also a Revo Pico RadioStation internet radio. (But I feel their roles are irrelevant in this problem; here for completeness.) Update 2: Added printer, again for completeness; don't think it causes the issue. Topology: In the spare room, the Windows 7 32-bit Home machine (1Gb/s), the MacBookPro incl. Bootcamp W7 Pro (1Gb/s), the Toshiba NB100 XP Home (100Mb/s) and the Epson SX600FW print/scanner (100Mb/s) all connect to Bridge2. Bridge2 links over a single 1Gb/s run to Bridge1 in the lounge. Bridge1 also connects the NAS (1Gb/s), a set-top box (100Mb/s) and the Modem (100Mb/s), which goes out to the DSL line. Wireless (occasional): HTC Desire Z, iPod touch, Revo Pico RadioStation internet radio. Do you mean Zyxel? And also: why bridges? Why not a simple 1GB switch?
Also you failed to mention what type of setup you have DNS/static ips/DHCP/etc; is the modem serving ips/dns gateway? etc? Usually the modem has some sort of log accessible through its web interface: does it say anything when you get those errors? +1 @Jakub yes Zyxel (I will correct). The bridges were cost effective (approx 20 pounds each). It also meant I only needed to run one gigabit cable between rooms rather than three. But perhaps I have cut corners as you might be indicating? The setup is DHCP for the devices (when I go into admin of http://speedtouch.lan/ I see that). As for IPS/DNS gateway I would have to check... But the setup should be fairly typical for a home ADSL modem with ethernet and wifi ports I would think. +1 @Renan I did look at the logs and it did say once about the NB100 PC name having an unresolvable address. But I haven't been too scientific about it to date, so perhaps I should be - I haven't been recording the logs meticulously everytime the problem is encountered. I would say at least 66% of the time, it's been OK. What I can't understand is why it 'decides' to not work on some days. After all this is digital equipment 0 or 1, don't have bad days. @therobyouknow, when you experience limited connectivity, are you able to ping ips on the network? Or is your IP not even assigned to your interface? +1 @Jakub, thanks. I'll find out later when I'm home and when it happens. Can't say just yet. With incomplete information I'd put my money on a DHCP issue. Turn the Modem and two bridges on and let the modem finish booting before turning anything else on. (The DHCP server not being available during a reboot should not be an issue for Windows 7 so it's probably an issue with the modem/router). +1 @BJ292 yes, I would prefer that being the problem, the SpeedTouch 580 ADSL is a rather old modem (6 years+) so the firmware may not cope with some scenarios. 
What do you think of the setup - a suggestion elsewhere on here is that bridges might not be suitable and perhaps instead a switch. I would consider a change of hardware if I knew it would bring more certainty. I think at least one other device as well as the modem will be necessary as most cost effective home modems I've seen offer 100Mbit ports given that internet doesn't come close. But for faster sharing locally,a 1Gbit LAN suits. I'm a great believer in sweating the assets so if changing your boot sequence solves the problem I'd live with it. If I was buying new kit I'd get an ADSL modem/router with a built in 4 port Gb switch for the lounge and get a basic 4 port Gb switch for the spare room. +1 @BJ292 for "sweating the assets" - I agree as I too prefer to make the best of what I have rather than resort to forking out cash. I started out with the 580 ADSL router with 100mbit ports. Then I decided I wanted a 1Gbit LAN as most of my computing devices supported it including my NAS and I wasn't benefitting from the reliability and speed compared to wireless. I did look for 1Gbit ADSL modem routers but they seem rare, most are 100mbit. One further question, do you know if ADSL routers remember how they fulfilled DHCP requests last time they were on? (I don't mean the configuration) but rather, like, who and how they allocated IP addresses last time they were switched on. I ask because a few days ago my setup did not work at all - no internet, no matter how many times I restarted and tried different sequences. It seemed like the modem was remembering to misbehave and storing some bad behaviour in flash perhaps. 
Accepted answer because, although it doesn't find the root cause of my problems, the suggestions you provide are all worth considering and you cover most (maybe all) of the points made by others: 1) you touch upon the point about the sequence of start-up being a potential issue 2) "Sweating the assets" - trying to make the best of what one has before parting with cash 3) a suggestion for replacement if "sweating the assets" did not work out; the suggestion you provide corroborates what others have said (credited them also with upvotes). Further observation: I wonder if it might be something to do with the attenuation of the DSL signal from the local exchange. My reasoning is that if I switch on the Thomson SpeedTouch 580 for the first time in a day, then when all the machines are booted up, they show limited connectivity. If I shut them and the modem down and restart, then things are as normal - I get connectivity. So I'm guessing that the exchange adjusts the dB of the signal for this older modem. I would have to prove this theory by attempting to connect wirelessly as well, as in the past the problem appears to have been just wired. BJ292: I've just found an all-in-one Gigabit ADSL modem router: the Draytek Vigor 2820 http://www.draytek.co.uk/products/vigor2820.html as mentioned in The Register article: http://www.reghardware.com/2012/04/13/gizmo_week_feature_soup_up_your_home_network/page3.html If my problems get really bad then I would consider this purchase. I don't think your issue is with DSL attenuation. It's probably a timing issue somewhere in your setup. Are you rebooting or powering off and on? Also your ISP will do a training cycle when your speedtouch connects to provide optimum connection speed - possibly it caches these settings across quick reboots, which speeds things up - possibly making your speedtouch available fractionally earlier for DHCP requests etc.
You could confirm this by booting the speedtouch first and not turning anything else on until it has a solid connection. I'm powering off and then on. Thanks for the insight into ISP mechanisms (+1).
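One concrete check fits the DHCP-timing theory discussed above: Windows shows "limited connectivity" when it gives up on DHCP and self-assigns an APIPA address in 169.254.0.0/16. So the first thing to look at on an affected machine is whether the address ipconfig reports falls in that range. A small Python sketch of the check (the helper name is mine):

```python
import ipaddress

def is_apipa(addr: str) -> bool:
    """True if addr is in 169.254.0.0/16, i.e. Windows self-assigned it
    because the DHCP server (the SpeedTouch here) didn't answer in time."""
    return ipaddress.ip_address(addr).is_link_local

print(is_apipa("169.254.37.5"))  # True  -> DHCP failed: 'limited connectivity'
print(is_apipa("192.168.1.23"))  # False -> a normal lease was obtained
```

If the affected machines show a 169.254.x.x address, running ipconfig /renew after the modem has fully booted should confirm the boot-order/DHCP-timing explanation.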
In web development, package managers play a central role in managing dependencies, optimizing project workflows, and ensuring the seamless integration of third-party packages. Two popular package managers that have gained prominence are npm and Yarn. Yarn is a relatively new entrant among package managers. Initially developed by Facebook, Yarn emerged as a response to some of the performance and security limitations of npm. Yarn prioritizes speed, reliability, and security in its design and functionality. The primary goal was to create a package manager that addressed the issues faced by developers in larger projects and ensured a smooth workflow. It introduced "offline mode," which allows developers to install packages without the need for a constant internet connection. This feature is especially beneficial for developers in environments with limited connectivity. Yarn leverages parallel package installation, meaning it can fetch and install multiple packages simultaneously, further enhancing its efficiency. Yarn also places a significant emphasis on security, a crucial concern in today's software development landscape. It employs various mechanisms to ensure the security and integrity of packages it manages. For example, it uses checksums to verify the integrity of downloaded packages and adopts a strict policy of permitting only secure connections for package downloads. Yarn's approach to security aligns with best practices for safeguarding against vulnerabilities in the software supply chain. npm's performance has seen notable improvements over the years, but it still has certain limitations that may impact the efficiency of larger projects. npm's performance is characterized by a few key aspects. Firstly, its package installation process can be relatively slower compared to Yarn. This slower installation process often results from npm's approach to dependency resolution, which involves traversing a nested tree of dependencies.
In larger projects with complex dependency graphs, this recursive resolution process can lead to increased build times and decreased productivity. Another aspect of npm’s performance relates to its vulnerability to version conflicts. The flexible nature of npm allows for multiple versions of the same package to coexist within a project. While this flexibility can be advantageous in some cases, it also creates the potential for version conflicts between dependencies, which can be challenging to resolve. Yarn is designed with performance optimization in mind. It introduced features that directly address some of the performance shortcomings of npm. Yarn’s “offline mode” allows developers to install packages without a constant internet connection. This feature is particularly beneficial for those working in environments with unreliable or limited connectivity, ensuring that the development process can continue smoothly even under less-than-ideal network conditions. Parallel package installation is another significant contributor to Yarn’s performance superiority. Yarn is capable of fetching and installing multiple packages concurrently, reducing wait times during installation. Yarn’s reliance on a lockfile, a file that specifies the exact versions of dependencies to be installed, contributes to its deterministic approach. This deterministic resolution process ensures that the same versions of dependencies are consistently installed across different environments, reducing the likelihood of version conflicts and ensuring project stability. npm uses a nested dependency tree approach for dependency resolution. In this system, each dependency can have its own dependencies, creating a hierarchical tree structure. This approach offers flexibility, allowing developers to include different versions of a package for specific parts of a project, if needed. One of the challenges associated with npm’s nested structure is the potential for version conflicts. 
When different parts of a project require distinct versions of a particular package, npm must find a way to reconcile these differences. This can result in complex dependency resolution scenarios that require careful management. Yarn takes a different approach by employing a flat and deterministic resolution mechanism. In Yarn's system, each package can have only one version within a project, eliminating the potential for version conflicts. Yarn achieves this determinism through the use of a lockfile, a file that specifies the exact versions of all dependencies required for a project. Yarn's deterministic resolution offers several benefits. First and foremost, it enhances project stability. Developers can be confident that the same versions of dependencies will be used in development, testing, and production, reducing the risk of issues caused by unexpected version differences. Yarn's deterministic approach simplifies the process of managing dependencies. There's less need to spend time resolving version conflicts, as the lockfile ensures that the correct versions are consistently installed.
Security and Reliability
One of the key features that npm introduced to enhance security is the "npm audit" command. This command scans project dependencies for known vulnerabilities and provides a report highlighting potential security issues. Developers can use this information to take proactive steps to mitigate vulnerabilities, such as updating packages to patched versions or finding alternative packages that are more secure. While npm's security audit feature is a valuable tool, it's essential to note that its effectiveness depends on developers actively using it. It's only through regular auditing and taking appropriate actions based on the audit results that a project can maintain a high level of security. To enhance security, Yarn utilizes various mechanisms to protect the integrity of packages it manages.
For instance, it uses checksums to verify that the packages being installed have not been tampered with during the download process. This helps ensure that the code being introduced into a project is exactly what it’s supposed to be, reducing the risk of malicious or compromised packages. Yarn adopts a strict policy of allowing only secure connections for package downloads. This security measure safeguards against potential attacks that could compromise the transmission of packages between the npm registry and a developer’s environment.
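The checksum mechanism described above boils down to hashing the downloaded tarball and comparing it with the value pinned in the lockfile. A toy Python sketch of the idea (this is an illustration, not Yarn's actual code; real lockfiles store base64-encoded SRI strings rather than the hex digest used here):

```python
import hashlib
import hmac

def package_checksum(data: bytes) -> str:
    # A lockfile-style integrity value: algorithm prefix plus digest.
    return "sha512-" + hashlib.sha512(data).hexdigest()

def verify_package(data: bytes, pinned: str) -> bool:
    # Reject any tarball whose bytes don't match the pinned checksum;
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(package_checksum(data), pinned)

tarball = b"contents of some-package-1.3.0.tgz"  # stand-in for a real download
pinned = package_checksum(tarball)               # what the lockfile would record
print(verify_package(tarball, pinned))           # True
print(verify_package(b"tampered bytes", pinned)) # False
```

The same pattern is why a committed lockfile matters: the pinned hashes travel with the project, so every environment re-verifies the identical bytes.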
Hello. I'm racking my brain over a new Windows 2003 Server we setup with the default mail server (POP3, SMTP) Here is our current setup, domains and IP's renamed for obvious reasons: Server behind a Linksys router; router is forwarding ports 25 and 110 to server (192.168.1.150) Domain (domain.com) hosted at GoDaddy: A record for "e" points to our external IP 111.222.333.444. MX record "e" pointing to "e.domain.com" Setup "e.domain.com" as domain on Win2K3 POP3 Service (and likewise entry automatically created for SMTP Virtual Server.) I can send mail from "firstname.lastname@example.org" to "email@example.com" successfully. I can send mail from "firstname.lastname@example.org" to "email@example.com" (or other external email address) successfully. I cannot receive mail from "firstname.lastname@example.org" or any other external address. I receive the following message Your message has been delayed and is still awaiting delivery to the following recipient(s): Drilling to the bottom of the test, this is the last line with the error: Testing TCP Port 25 on host e.domain.com to ensure it is listening and open. The specified port is either blocked, not listening, or not producing the expected response. A network error occurred while communicating with remote host Message: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 188.8.131.52:25 at System.Net.Sockets.TcpClient.Connect(String hostname, Int32 port) As noted above, I have port forwarding set and Windows Firewall has also been turned off/on with exceptions for the above ports, all with the same results. I am also able to ping the "e.domain.com" name, and able to telnet to port 25 and send an email, with the results ending up the same as above (everything works except external email being received by my server.) Might check with your ISP to see if port 25 is blocked.
If so, you would probably need to use a mail relay service and enter the relay info in the SMTP relay section. Make sure your outgoing mail is set up only for the local network. When I first set mine up, spammers scanned it and dumped a bunch of spam through the server. Thank you, the port was being blocked at the ISP level. That was driving us insane; apparently we have a commercial account, but they had us under the residential service, which of course was blocking a number of incoming ports.
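One quick way to confirm the diagnosis above, before calling the ISP, is to test from a machine outside your network whether anything answers on port 25. A minimal sketch (the hostname here is a placeholder, not the real server):

```python
import socket

def smtp_port_open(host, port=25, timeout=5):
    """Return True if a TCP connection to host:port succeeds,
    False if it is refused, filtered, or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from OUTSIDE your network; "e.domain.com" is a placeholder.
# False here, while telnet works from inside, points at ISP or router blocking:
# smtp_port_open("e.domain.com")
```

If this returns True from inside the LAN but False from outside, the problem is upstream of your server, exactly the ISP-blocking scenario described above.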
As I mentioned in my previous post, BIM 360 uses the following Forge APIs:
- Authentication (OAuth)
- BIM 360 (HQ) API
- Data Management API
- Model Derivatives API
So far, my focus has been a big-picture discussion of Forge with respect to BIM 360. Let’s shift focus a little and look at each component. As a starting point, let’s take the first one, Authentication, or OAuth. OAuth in Simple Words What is OAuth, anyway? If you are genuinely new to the web development world, I’m sure OAuth sounds like yet another buzzword. OAuth is an authorization framework that allows a third-party application to obtain access to data stored in the cloud. The name OAuth comes from Open standard for Authorization. It is an Internet standard developed by the Internet Engineering Task Force (IETF), and it is implemented by companies like Facebook, Google, and Microsoft to support their APIs. Forge uses OAuth 2.0, the newer version of the OAuth framework, as part of the authentication process of verifying the user and/or application. 2-Legged vs. 3-Legged Forge supports two types of authentication workflow: two-legged and three-legged. Here, “leg” implies “party”; i.e., 2-legged means two parties are involved during the authentication process, while 3-legged means three. Both types require a client ID and client secret, which you can obtain from the developer portal (https://developer.autodesk.com); you first register your app, and then you can get them from the My Apps page. In 2-legged, two parties are involved in the process: a third-party application and the Forge authorization server. You use the client ID and client secret to verify access. In a way, this is analogous to a user ID and password, except that it is not in the user context. (It is important that you keep the client secret in a safe place, as having it allows anybody to access your data.) 2-legged is used in an application-to-application context, sometimes called business-to-business or B2B. 
For example, in the context of the BIM 360 HQ API, a typical use case may be for a company or account owner to automate project creation and populate project members based on directories that already exist in the back office. These kinds of tasks can become tedious to do manually as the number of projects and project members grows. In this scenario, the API user is limited to a small number of personnel with administrative privileges. Implementation-wise, this workflow is rather straightforward. Figure 1 below illustrates the 2-legged authentication workflow. Notice that two parties are involved in this workflow: the third-party application and the authorization server. Figure 1. 2-legged authentication workflow.
- The 3rd party application (A) sends a client ID and client secret to the authorization server (B).
- The authorization server (B) verifies the client ID/secret and returns a token to the third-party application (A).
3-legged is a little more complex. In addition to the third-party application and the Forge authorization server, the end user is involved in the authorization process (thus the name 3-legged, meaning three parties). This method is used in the user context. During this authentication method, the user is redirected to accounts.autodesk.com to verify the user and obtain consent for the third-party application to access the data on the user's behalf. During the 3-legged process, the application makes two requests to the server: (1) to ask for authorization from the user, and (2) to request an access token. During the first request, the server prompts the end user for verification. Figure 2 below illustrates the 3-legged authentication workflow. Notice that there are three parties in this workflow: the third-party application, the authorization server, and the end user. Figure 2. 
3-legged authentication workflow.
- The 3rd party application (A) sends a client ID and callback URL to the authorization server (B).
- The authorization server (B) presents a login page to the user (C) of the application.
- The user (C) logs in with an Autodesk ID and consents to grant the third-party application access to their data on his/her behalf.
- The authorization server (B) returns an “authorization code” to the registered callback URL of the application (A).
- The 3rd party application (A) requests an “access token” from the authorization server (B) using the “authorization code”.
- The server (B) returns the “access token” to the 3rd party application (A).
Notice that in this workflow the user never needs to expose his/her password to the third-party application, making it a more secure authentication method. Implementation-wise, however, 3-legged is more complex than 2-legged. You should be aware that this workflow is designed for web services: the process requires a callback URL. Even if you are writing a desktop app, the implementation normally involves at least the authorization part on a server side. (*1) *1) There has been some discussion about documenting a “best practice” for desktop applications. We hope to provide an example in the near future. Important: In both types, it is important to keep the client secret in a safe place. You should never share the client secret. Typically, you should keep it on a server side, not on client machines. OAuth for BIM 360 At the time of this writing, BIM 360 supports authentication as follows:
- HQ – uses 2-legged only. You also need to register your client ID in HQ.
- Docs – uses 2-legged and 3-legged. You need to register your client ID in HQ.
- Team – uses 3-legged.
Both 2- and 3-legged require you to create (or register) your app through developer.autodesk.com. 
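To make the two flows concrete, here is a minimal Python sketch of the request each one makes. The `authentication/v1` endpoint paths and the `data:read` scope reflect the Forge documentation at the time of writing; treat them as assumptions and check the current reference before relying on them:

```python
import urllib.parse

FORGE_AUTH = "https://developer.api.autodesk.com/authentication/v1"

def build_token_request(client_id, client_secret, scope="data:read"):
    """2-legged: the form-encoded body POSTed to FORGE_AUTH + "/authenticate".
    The JSON response carries the access_token."""
    return urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        "scope": scope,
    })

def build_authorize_url(client_id, callback_url, scope="data:read"):
    """3-legged, step 1: the URL the user's browser is redirected to.
    After login and consent, the server redirects back to callback_url
    with ?code=..., which the app then exchanges for an access token."""
    return FORGE_AUTH + "/authorize?" + urllib.parse.urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": callback_url,
        "scope": scope,
    })
```

Note that the 2-legged body contains the client secret, so it must be built and sent server-side only, while the 3-legged URL is safe to hand to a browser because it carries no secret at all.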
For detailed instructions about creating your app, please refer to this page. HQ and Docs require additional steps to grant API access to a specific account through the HQ UI: https://bim360enterprise.autodesk.com >> [Account Settings] menu >> [Apps & Integrations] tab. For step-by-step instructions about this process, please refer to this post. BIM 360 Team does not require going through HQ or any other special activation process; anybody can access it through the API.
- OAuth 2.0 Authorization Framework specification
- List of OAuth providers
This is a follow-up to my original post about cognitive traps that hinder learning and stunt professional growth. Information junk food refers to any information gathering activity that prematurely satisfies your hunger to learn and provides fleeting emotional pleasure in lieu of actual intellectual nourishment. Some information junk foods to avoid: Fattening Abstractions - There is great power in naming things, which is why it is an integral part of nearly all creation stories (e.g. Adam naming all the animals in the Christian Bible). Unfortunately, learning acronyms, buzzwords, and high-level abstractions often fools us into thinking we know more than we really do. This is why I've interviewed candidates who could talk for hours on end about service-oriented architecture, but failed miserably when it came to a simple fizzbuzz programming test. High Calorie Public Opinions - The blogosphere is full of brilliant people who passionately articulate their beliefs. Unfortunately, the human need to fit into a community (open source, java, ruby, agile, alt.net) often trumps the desire to keep an open mind. While the person who originally formed the opinion derived great value from the decision making process, the casual reader who easily adopts opinions like a new piece of trendy clothing gains nothing but unfounded and potentially misapplied biases. Sugary Old Habits - Paradoxically, the better you become at something, the more difficult it can be to learn new things. Once a habit becomes second nature, we forget that there are other alternatives, and that we probably made our original choice at a time in our careers when we knew less than we do now. This human propensity to stick with things that are comfortable and have worked well in the past is why the most experienced people in our profession are so often dismissed as irrelevant rather than respected. 
Some Healthy Alternatives: Just one more block - The most helpful habit I've adopted while running is to respond to my first impulse to stop running with the mantra, "just one more block". I think this also works well with learning. The next time you feel the urge to fall back on that same worn-out API or unproductive GUI-oriented approach, discipline yourself to take a minute and learn just one new method or one new keyboard shortcut. The investment will pay great dividends down the road. Empty your Cup - This is a phrase that I remember from some zen book that I read in high school. The idea is that in order to learn something more deeply, you often have to intentionally forget what you already know so that you can approach things with fresh eyes. The next time you catch yourself dismissing a learning opportunity because it pertains to something you think you already know or have a strong opinion about, pretend that you are a newbie and see if you can glean a new tidbit or perspective. Bite your tongue - Every once in a while, I'll catch myself talking about or recommending a technology or tool that I've only read about but never used. I am now trying to get into the habit of not letting myself talk about anything unless I've at least downloaded the technology and done a proof of concept for myself. For me, this serves as extra motivation to take the next step in learning. Approach documentation like a scientist - Treat techniques you read about like a hypothesis that you must scientifically prove with a unit test before you can confirm it is true. I've caught myself several times regurgitating something I'd read that I thought was very straightforward, only to discover later that I'd slightly misinterpreted how it works. Documentation and resource sites are subject to the same ambiguity and misinterpretation as requirements. Don't trust your understanding until you've seen it with your own eyes.
With the GA release of Spring BlazeDS Integration 1.5.0, it is time (overdue, in fact) that I officially address the state of the Flex Addon for Spring Roo with the community. As many of you have noticed, there has been relatively little activity on the addon since the M1 release, other than occasional updating of the codebase to try to bring it up to par with the latest Spring Roo releases. Unfortunately, we just haven't had the necessary resources to dedicate proper effort to the addon, and I don't see that changing anytime soon as the Spring Web Team and Spring Roo Team are both already swamped with plenty of work. As I am personally moving on from focusing on Flex to focusing the majority of my development time on WaveMaker (see Rod's blog in case you missed it: http://blog.springsource.com/2011/03...res-wavemaker/), I wanted to reach out to the community in hopes that we can find some people interested in contributing to keeping the Flex Addon alive. What I am officially proposing today is that we make the Flex Addon a wholly community-driven project, a la the Spring Extensions projects such as Spring ActionScript. Quite a number of you have expressed interest in seeing the project continue, and some of you have expressed interest in actually contributing, so I think it's time that rubber meets the road. I really believe that the people working on the Flex Addon should be people who actually spend a lot of their time building Flex applications. You are the ones who really understand the potential day-to-day pains that the Flex Addon could alleviate. The work I did on the M1 release should provide an excellent foundation for this. Honestly, I should have made this call to the community sooner (eternal optimist that I am, I kept holding out hope that I would get the time to finish what I started), but better late than never I suppose. 
I am preparing to move the current spring-flex SVN repo over to GitHub, and in doing so I intend to split the Flex Addon back out into a separate project. In the process, I will prepare a README to describe how to set up a development environment. I hope that this will make it easier for anyone interested to start getting involved. I will be happy to review and merge pull requests for the time being and provide as much feedback as I can, even though I don't have the time to contribute code myself at the moment. I would like this to evolve naturally and see how it goes. Hopefully through this process, a clear new "owner" for the project will emerge. Again, I apologize that it took me so long to formally do this. I'd greatly appreciate your thoughts and feedback here on this thread, especially if you are interested in contributing!
Vehicle Count Methodology
This is somewhat related to https://github.com/sharedstreets/sharedstreets-mobility-metrics/issues/42. To confirm my understanding of the current method:
Total Vehicles = count of unique device_ids by day from Device_Status with an event_type of reserved, available, or unavailable at least once in that day
Active Vehicles = count of unique device_ids by day from Trips
Is that right? In the interest of comparing notes, SFMTA has taken a few approaches to counting devices thus far, depending on the purpose, and they each have their pros & cons. We initially started with a simple count of unique device_ids for each day, but since providers may swap out vehicles mid-day, this ended up being misleading.
Hourly snapshots of on-street devices = Given a "snapshot date/time" (e.g., 4/25 at 8am), total count of unique device_ids from Device_Status that have sent an event within 48 hours of the snapshot date/time and with a latest event of available, reserved, or unavailable
Hourly snapshots of revenue devices = Given a "snapshot date/time" (e.g., 4/25 at 8am), total count of unique device_ids from Device_Status that have sent an event within 48 hours of the snapshot date/time and with a latest event of available or reserved
The only difference between the two is whether or not to include the unavailable devices. If measuring cap adherence, we include the unavailable devices (what we've been calling "on-street devices"), but if looking more at actual service and what's available to customers (what we've been calling "revenue devices"), we exclude them. The 48-hour window can also be flexible. We've also been calculating:
Revenue Hours = sum of total time in an available or reserved state, which is derived from Device_Status
Using the hours helps to account for the change in devices over the course of the day. Given different service models, this may or may not matter. 
Some providers pull devices from service at night, some rebalance and/or replace devices over the course of the day, and others may just leave devices out all day with minimal rebalancing. The hours account for this so that we can compare and/or aggregate providers in an apples-to-apples way. The snapshot approach is a little more intuitive, but revenue hours are a little more flexible. We're currently working with both now.

> Is that right?

This is correct, except for Total Vehicles we would also count a vehicle that only had a "removed" status in addition to the other three, and we add any IDs that were seen in trips. There shouldn't be any vehicle IDs from trips that were not already in a status change, but it's a conservative choice just to be sure. Presumably vehicles with only a "removed" event also had other events, but those fired on a previous day, before the query window. The idea with Total Vehicles is to include basically anything that hit an endpoint.

> The snapshot approach is a little more intuitive, but the revenue hours is a little more flexible. We're currently working with both now.

Thanks for sharing these detailed notes. I would love to include as many valid approaches as possible, and will take a look at revenue hours as one option alongside what we have today. Based on feedback, we have added additional detailed fleet-size metrics based on reconstruction of the status_changes feed. This functionality is available in v2.0. More details here: https://github.com/sharedstreets/mobility-metrics/issues/77
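The snapshot and Revenue Hours definitions discussed above can be sketched in a few lines. This is an illustrative Python sketch, not the project's actual implementation; the tuple shapes stand in for the MDS feeds, and each event's state is assumed to hold until the device's next event:

```python
from datetime import datetime, timedelta

# Status sets from the definitions above.
ON_STREET = {"available", "reserved", "unavailable"}  # cap adherence
REVENUE = {"available", "reserved"}                   # service to customers

def snapshot_count(events, snapshot_time, statuses, window_hours=48):
    """Count unique devices whose latest event within window_hours of
    snapshot_time is in `statuses`. events: (device_id, time, event_type)."""
    cutoff = snapshot_time - timedelta(hours=window_hours)
    latest = {}  # device_id -> event_type of its most recent in-window event
    for device_id, t, event_type in sorted(events, key=lambda e: e[1]):
        if cutoff <= t <= snapshot_time:
            latest[device_id] = event_type
    return sum(1 for state in latest.values() if state in statuses)

def revenue_hours(status_history, day_start, day_end):
    """Sum the time each device spends available/reserved within a window.
    status_history: device_id -> time-sorted list of (time, event_type)."""
    total = timedelta()
    for device_events in status_history.values():
        # Pair each event with its successor to form state intervals.
        for (t0, state), (t1, _) in zip(device_events, device_events[1:]):
            start, end = max(t0, day_start), min(t1, day_end)
            if state in REVENUE and start < end:
                total += end - start
    return total.total_seconds() / 3600.0
```

Passing `ON_STREET` or `REVENUE` to `snapshot_count` switches between the two snapshot variants; `revenue_hours` is the duration-based alternative that is robust to mid-day vehicle swaps.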
A reader pointed out to me that the RSS page on my blog hadn't been updated since May. That must be when I updated PyBlosxom to the latest version and switched to the python3 branch. Python 2, as you probably know, is no longer going to be supported starting sometime next year (the exact date in 2020 doesn't seem to be set). Anyway, PyBlosxom was one of the few tools I use that still depend only on Python 2, and since I saw they had a python3 branch, I tried it. PyBlosxom is no longer a project in active development: as I understand it, some of the developers drifted off to other blogging systems, others decided that it worked well enough and didn't really need further tweaking. But someone, at some point, did make a try at porting it to Python 3; and when I tried the python3 branch, I was able to make it work after a couple of very minor tweaks (which I contributed back to the project on GitHub, and they were accepted). Everything went fine for a few months, until I received the alert that the index.rss and index.rss20 files weren't being generated. Curiously, the RSS files for each individual post are still there; just not the index.rss and index.rss20. I found there was already a bug filed about this. I tried the workaround mentioned in the bug, at the same time adding Atom to the RSS 0.9.1 and RSS 2.0 flavors I've had in the past. I haven't generated Atom for all the old entries, but any new entries starting now should be available as Atom. Fingers crossed! If you're reading this, then it worked and my RSS pages are back. Sorry about the RSS hiatus. [ 09:10 Oct 10, 2019 More blogging | permalink to this entry | I reviewed my Archos 5 Android tablet last week, but I didn't talk much about my main use for it: offline reading of news, RSS feeds and ebooks. I've been searching for years for something to replace the aging and unsupported Palm platform. 
I've been using Palms for many years to read daily news feeds; first on the proprietary Avantgo service, but mostly using the open source Plucker. I don't normally have access to a network when I'm reading -- I might be a passenger in a car or train, in a restaurant, standing in line at the market, or in the middle of the Mojave desert. So I run a script once a day on a network-connected computer to gather up a list of feeds, package it up and transfer it to the mobile device, so I have something to read whenever I find spare time. For years I used Sitescooper on the host end to translate HTML pages into a mobile format, and eventually became its primary maintainer. But that got cumbersome, and I wrote a simpler RSS feed reader. But on the reader side, that still left me buying old PalmOS Clies on ebay. Surely there was a better option? I've been keeping an eye on ebook readers and tablets for a while now. But the Plucker reader has several key features not available in most ebook reader apps:
- An easy, open-source way of automatically translating RSS and HTML pages into something the reader can understand;
- Delete documents after you've read them, without needing to switch to a separate application;
- Random access to the document, e.g. jump to the beginning or end;
- Follow links: nearly all RSS sites, whether news sites or blogs, are set up as an index page with links to individual story pages;
- Save external links if you click on them while offline, so you can fetch them later.
Most modern apps seem to assume either (a) that you'll be reading only books packaged commercially, or (b) that you're reading web pages and always have a net connection. Which meant that I'd probably have to roll my own; and that pointed to Android tablets rather than dedicated e-readers. Android as a reader: All the reviews I read pointed to Aldiko as the best e-reader on Android, so I installed it first thing. And indeed, it's a wonderful reader. 
The font is beautiful, and you can adjust size and color easily, including a two-click transition between configurable "day" and "night" schemes. It's easy to turn pages (something surprisingly difficult in most Android apps, since the OS seems to have no concept of "Page down"). It's easy to import new documents and easy to delete them after reading them. So how about those other requirements? Not so good. Aldiko uses epub format, and it's possible (after much research) to produce those using ebook-convert, a command-line script you can get as part of the huge Calibre package. Alas, Calibre requires all sorts of extraneous packages like Qt even if you're never going to use the GUI; but once you get past that, the ebook-convert script works pretty well. Except that links don't work, much. Sometimes they do, but mostly they do nothing. I don't know if this is a problem with Calibre's ebook-convert, Aldiko's reader, or the epub format itself, but you can't rely on links from the index page actually jumping anywhere. Aldiko also doesn't have a way to jump to a set point, so once you're inside a story you can't easily go back to the title page (sometimes BACK works, sometimes it doesn't). And of course there's no way to save external links for later. So Aldiko is a good book reader, but it wouldn't solve my feed-reading problem. And that meant I had to write my own reader, and it was time to delve into the world of Android development. And it was surprisingly easy ... which I'll cover in a separate post. For now, I'll skip ahead and ruin the punch line by saying I have a lovely little feed-reading app, and my Archos and Android are working out great. [ 15:14 Dec 20, 2010 More tech | permalink to this entry |
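The host-side half of the pipeline described above -- pulling a feed's index and collecting the story links to fetch and package for offline reading -- boils down to a few lines. A hedged sketch of just that step (not the actual script used here):

```python
import xml.etree.ElementTree as ET

def story_links(rss_text):
    """Return (title, link) pairs from an RSS 2.0 feed. These are the
    per-story pages a daily packaging script would fetch next, so the
    index page's links still work offline."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

A real fetcher would then download each link, rewrite the index's hrefs to point at the local copies, and bundle the lot for transfer to the device.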
We are really glad to announce that two eminent speakers have agreed to deliver talks at our event. Dr. Preslav Nakov, Senior Scientist, Qatar Computing Research Institute Brief Bio: Preslav Nakov is a Senior Scientist at the Qatar Computing Research Institute, HBKU. His primary research interests include computational linguistics, machine translation, lexical semantics, Web as a corpus, and biomedical text processing. Dr. Nakov co-authored a Morgan & Claypool book on Semantic Relations between Nominals, two books on computer algorithms, and over 100 research papers. He received the Young Researcher Award at RANLP'2011. He was also the first to receive the Bulgarian President's John Atanasoff award, named after the inventor of the first automatic electronic digital computer. Dr. Nakov is an Associate Editor of the AI Communications journal, and a member of the SIGLEX board. He co-chaired SemEval'2014, SemEval'2015, and SemEval'2016, and co-organized several SemEval tasks, e.g., on sentiment analysis on Twitter, on semantic relation extraction, on the semantics of noun compounds, and on community question answering. Dr. Nakov received a PhD degree in Computer Science from the University of California at Berkeley, and he has an MSc degree from Sofia University. He was a Research Fellow at the National University of Singapore, an honorary lecturer at Sofia University, and a research staff member at the Bulgarian Academy of Sciences. Sentiment Analysis on Twitter: a SemEval Perspective The Internet has democratized content creation, leading to the rise of social media and to an explosion in the availability of short informal text messages. Microblogs such as Twitter, weblogs such as LiveJournal, social networks such as Facebook, and instant messengers such as Skype and WhatsApp are now commonly used to share thoughts and opinions about anything in the surrounding world. 
This proliferation of social media content has created new opportunities for studying public opinion, with Twitter being especially popular for research purposes due to its scale, representativeness, variety of topics discussed, as well as ease of public access to its messages. I will present the evolution of a semantic analysis task that lies at the intersection of two very trendy lines of research in contemporary computational linguistics: (i) sentiment analysis, and (ii) natural language processing of social media text. The task was part of SemEval (the International Workshop on Semantic Evaluation, a semantic evaluation forum previously known as SensEval), and it ran in 2013, 2014, and 2015, attracting the highest number of participating teams at SemEval in all three years; there is an ongoing edition in 2016. The task included the creation of a large contextual and message-level polarity corpus consisting of tweets, SMS messages, LiveJournal messages, and a special test set of sarcastic tweets. The evaluation attracted 44 teams in 2013, 46 in 2014, and 41 in 2015, which used a variety of approaches. The best teams were able to outperform several baselines by sizable margins with improvements over the years. The task has fostered the creation of some freely-available, and by now widely used, resources such as NRC's Hashtag Sentiment lexicon and the Sentiment140 lexicon, which the NRC-Canada team initially developed for their participation in SemEval-2013 task 2, and which were key for their winning the competition. The 2015 and 2016 editions of the task switched focus to sentiment with respect to a topic, on a positive/negative/neutral (2015) or on a five-point scale (2016). The latter is used for human review ratings on popular websites such as Amazon, TripAdvisor, Yelp, etc. From a research perspective, moving to an ordered five-point scale means moving from classification to ordinal regression. 
Another shift in 2015 was from sentiment towards a topic in a single tweet to sentiment towards a topic in a set of tweets (trend detection). This represents a move from classification to quantification. In real-world applications, the focus often is not on the sentiment of a particular tweet, but rather on the percentage of tweets that are positive/negative. In 2016, trend detection will also be offered on a five-point scale, which gets us even closer to what businesses (e.g., marketing studies) and researchers (e.g., in political science or public policy) want nowadays. From a research perspective, this is a problem of ordinal quantification. I will introduce the above subtasks in detail, and I will explain the process of creating the training and testing datasets. I will briefly describe some of the participating systems and the overall results, in comparison with a number of baselines. Special attention will be paid to the lessons learned, with analysis across a number of dimensions such as progress over the years, performance on out-of-domain data, utility of context in the contextual polarity task, impact of training data size, of using external resources, of negation handling, etc. Finally, I will compare the task to other related tasks at SemEval and beyond. Brief Bio: Prof. Verma is now Professor and Dean (Research & Development) at IIIT Hyderabad, India. His research interests are in the broad areas of Information Retrieval, Extraction and Access; more specifically: social media analysis, cross-language information access, summarization and semantic search. He also works in the areas of Cloud Computing and Reuse in Software Engineering. He is also the CEO of the IIIT Hyderabad Foundation, which runs one of the largest technology incubators in India. The Foundation manages IIIT-H’s IP and technology transfers.
Room: B008, Edificio Biblioteca, Campus de Fuenlabrada (Universidad Rey Juan Carlos) Camino del Molino, s/n, 28943 Fuenlabrada - Madrid, España. Tel: +34 914888460 Fax: +34 914667500 On this page you can find information about the courses and lectures I am involved in. I did my PhD with Peter McClintock at Lancaster University in England on nonlinear dynamics and stochastic processes. I became interested in path integrals, solid state, low temperature physics, computers, and electronics. There I worked on: Thermal ratchets with quasimonochromatic noise. The prehistory problem for large fluctuations in noise-driven systems. The Kramers problem for a multi-well potential. Phase transitions in a system driven by coloured noise. Then I got a postdoc position at Universiteit Leiden in The Netherlands, with an office at CWI working with Ute Ebert. I did research on low temperature plasma physics, and a bit on the Brownian ratchet experiment on the surface of HeII, quantum dissipative systems and quantum computing. The papers I wrote there were about: Pattern formation in electric discharges: streamer branching. Analytical derivation of the critical turbulence for phytoplankton bloom development. I have held a position since 1 September 2001 at Universidad Rey Juan Carlos in Spain. I am working on and/or contemplating: Plasma physics and electric discharges. Turbulence in HeII. Nanofluids and cone-jets. Reaction-diffusion equations. For current information you can visit the web page of the EMFF research group. Presently I teach first-year Physics, and Optics and Acoustics. All the information and notes about these undergraduate courses can be found here. I am also giving a conceptual Physics course at the Universidad de Mayores. For a list of papers follow this link. A copy of the book Electromagnetismo, Circuitos y Semiconductores, published in Spanish by Editorial Dykinson, can be obtained here. 
This is an introductory book for a first course on Electromagnetism, Circuits and Semiconductors, written in collaboration with J.L. Trueba. A draft copy of the book Cómo es nuestro mundo: del átomo al cosmos, published in Spanish by Editorial del Orto, Ediciones Clásicas, can also be downloaded here. It is based on the set of lectures I have delivered to a general audience since 2002 at the Universidad de Mayores, URJC. Last modified: 1 January, 2009
(UPDATE: I’ve decided to update this post slightly to also reflect my increasing level of experience.)

I was asked to help out a neighbouring startup in Berlin with their interview process for an iOS developer. Seeing as I continually seem to be the resident expert (I really don’t know how this happened and I look forward to the bigger fish!) I thought I’d put my thoughts down on paper as to how I would prepare for such an interview. So, here it is, should you be in a situation where you need an iOS person. Feel free to comment. There is always a bigger fish, but I have amassed a lot of iOS experience so far.

- How much experience do you have / how long have you worked with Objective-C / Cocoa / iOS?
- What was the first version of iOS you worked with?
- How are your memory management skills? (I only know ARC; I know about release/retain; I know how to make both types work together)
- Do you have experience with…
  - Customizing UIViews with drawRect: (i.e. CoreGraphics / QuartzCore)
  - CoreAnimation / CALayer manipulation
  - Container View Controllers? (i.e. childViewControllers on iOS 5+)
  - Customizing standard UI components (UINavigationBar, UITabBar, etc.)
  - Custom UITableViews, UITableViewCells
  - Laying out views (i.e. all layout code, setting frames, belongs in layoutSubviews)
- What networking frameworks are you familiar with / prefer? e.g. ASIHTTPRequest, AFNetworking, RESTKit, MKNetworkKit, etc.
- Have you ever worked with blocks / written block-based APIs?
- What about categories? Why would you typically write categories? And what about for code you have the source for?
- How do you tend to handle persistence on the device? (e.g. NSCoding, CoreData, NSData with file paths)
- Do you have experience with asynchronous image loading? i.e. a subclass/category on UIImageView
  - What do you have to be careful of? (e.g. cancel the image loading request when the view disappears or is about to be reused in a table view; don’t block the UI on the main thread, instead do image operations on a background thread)
  - Do you have any favourite open source libraries for this, or do you create your own?
- In addition, which open source libraries do you like / use most often? Why?
- Back to asynchronous communication. How do you approach this in general? (e.g. you update one data object from the server that other objects in your app need to know about) KVO? NSNotificationCenter? etc.
- Have you discovered the greatness of NSOperation and are you competent with concurrency? How might you create a block-based API that does work on a background thread and returns its result to the main thread?
- Do you have a public repository or a .zip file you could provide so we might look at your code examples?
- If you have a problem you need to solve and aren’t sure at the beginning how, what are your typical strategies for solving such problems? (A vague question)
- How good are you at debugging? i.e. NSLog tracing, breakpoints, Instruments, or what?
- (There is no correct answer, only to get a sense of the candidate’s style) Are you formal in your design approaches (e.g. UML diagrams), or are you more of a cowboy? (Code first, design later.) How do you approach writing your classes? (i.e. write headers first, comment what they are supposed to do, then implement later. Pen and paper first, then coding. Etc.)
- What is your opinion / commentary on the topic of encapsulation?
- How many apps have you worked on that have been published to the App Store?
  - Any you are particularly proud of, and why? What was your role on them?
- Do you have experience dealing with provisioning profile management?
- What part of iOS development do you like most? e.g. writing custom components, designing the app’s architecture, integrating foreign services (like other web services), etc. Or, if someone asked you what area of the app you would like to primarily develop, what would you pick and why? (Basically to determine what *kind* of programming he’s into.)
- Do you have experience working with maps (the MapKit framework)?
- (On formality) To what extent do you document your code? e.g. zero, commented headers, official doxygen/appledoc style headers.
- To what extent do you use Instruments for debugging?

(UPDATE: As a companion to this post, please also see my thoughts on technical interviews)

SECOND UPDATE: I also have the ultimate iOS developer technical test you can assign a potential hire. It should take 1-3 hours. It is easy to communicate, allows a lot of freedom of implementation so you can really get a better picture of how a developer thinks, and will make sure this developer knows the absolute fundamentals. Ready for it?

Calculate and display each Fibonacci number from 1 -> the max N possible on an iPhone with unsigned integers, and display each F(n) in a table view. The UITableView scrolling MUST remain smooth.

That’s it. You’ll be amazed at how profoundly simple this task sounds and yet how much iOS knowledge can be demonstrated. Not just what they know, but how they structure their work. You can assess their APIs, their separation of concerns when designing classes, the considerations they’ve made for performance, and their knowledge of concurrency. (Not to mention their knowledge of recursive functions.) It is ok to give them the formula, and allow them to use Google: F(n) = F(n-1) + F(n-2). Please let me know if any of this post was useful to you! Cheers, S.
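For interviewers who want a reference point, here is a rough sketch (in Python rather than Objective-C, and not part of the original test) of the bound the task refers to: how many Fibonacci numbers fit before a 64-bit unsigned integer overflows.

```python
# How many Fibonacci numbers fit in an unsigned 64-bit integer?
# F(1) = F(2) = 1; stop before the next term would overflow.
def max_fib_count(limit=2**64 - 1):
    a, b, n = 1, 1, 2          # a = F(n-1), b = F(n)
    while a + b <= limit:      # next term still representable?
        a, b, n = b, a + b, n + 1
    return n

print(max_fib_count())  # -> 93: F(93) is the last value below 2**64
```

A candidate's iOS solution would need the same overflow check before feeding the precomputed values to a UITableView data source.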
OPCFW_CODE
I’ve just started using duplicati in a linuxserver.io Docker container, with the web GUI. What are the best practices for backing up to Amazon Glacier? I found this page, but it’s from 2013. As far as I can tell: - Set the destination to back up to an S3 bucket - Don’t set the storage class to “GLACIER”, it does nothing. - In the AWS management console, set up a lifecycle rule for the bucket to move objects with names starting “duplicati-b” (but not others) to Glacier after a day or two. Is this correct? Does it work well? Are there any other special options I need to add? To the best of my knowledge, Glacier isn’t ideal because Duplicati needs to read the remote files for verifying, compacting, etc, and Glacier retrieval is too slow to work well with that. There are probably configurations within Duplicati that would allow you to use Glacier, but that is a significantly more involved process. Well, that’s what I’m asking about. What are those configurations? I just did this the other day using that old page you found. It worked ok. See Duplicati and S3 Glacier - Features - Duplicati Thanks. Are there any problems with Duplicati trying to read the glaciered S3 objects? Glacier doesn’t allow immediate read access to the objects. As such, Duplicati fails when it tries testing the files. I personally advise against using archive tier storage. (You can get hot storage at about the same cost as Glacier from other cloud providers such as Backblaze B2 or Wasabi.) Google does have an archive tier that allows for near immediate access, so that’s another option. If you really, really want to use Glacier, then you should set these options: --no-auto-compact = true --no-backend-verification = true These should get your backups to run without error. But note that I don’t know how restores even work, since object availability on archive tier storage is measured in hours. I imagine Duplicati will fail on restores unless you move the objects to a higher tier. 
You should definitely test this thoroughly. That’s unfortunate. The 2013 article says “The new storage engine for Duplicati 2.0 was designed with support for Amazon Glacier in mind.”, but I guess it turned out to be harder than they thought? Dunno… I guess it works if you use the two options I mentioned, but I don’t use it myself. I am curious how restores work when object availability can take up to 12 hours. Maybe someone who has done it can share their experience.
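The lifecycle rule described above can be expressed as an S3 lifecycle configuration. This is an illustrative sketch, not from the original thread; the rule ID and the exact number of days are placeholders you would choose yourself:

```json
{
  "Rules": [
    {
      "ID": "duplicati-block-volumes-to-glacier",
      "Filter": { "Prefix": "duplicati-b" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 2, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

Because the filter matches only object names beginning "duplicati-b", the smaller list and index files Duplicati reads during normal operation should stay in the standard storage class.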
OPCFW_CODE
Originally posted by Micah Burke:
Ok, once more, I've replaced my uxtheme.dll and I can access my themes (but not visual styles), but I am not seeing them properly, and if I load them it screws up my desktop so that I cannot read anything. I hate to be a whiner like this, but I am not sure what I am doing wrong. It seems like this enabled the ability to change themes but doesn't appropriately activate them and their styles. PLEASE PLEASE PLEASE help!

Originally posted by andyp:
Just choose the themes as you normally would with StyleXP — that's how you access your themes.

Did just that and still not working.

Originally posted by SnOoPy:
What I did was: install StyleXP 3 after uninstalling any newer versions (StyleXP 3 and earlier just overwrote the UxTheme.dll, whereas StyleXP 4 and above patch the UxTheme.dll), then copy this file (C:\windows\system32\UxTheme.dll) to another partition, folder, floppy, CD-RW, etc. Uninstall StyleXP 3. Uncheck system file protection. Boot with a Windows 98 boot disk. Overwrite the file (C:\windows\system32\UxTheme.dll) with the one you previously created.

Originally posted by andyp:
Here's a complete guide on how to hack your uxtheme.dll file.

NOTE 1: You don't have to do this if you have already installed an older version than 4.0.
NOTE 2: If you still want to use StyleXP, email TGT and tell them you were using their beta software and that it expired. They've always maintained that all betas will be free, so they should give you a registration key to fix the problem. Either that or just download the newest beta. They've also promised to always release a new beta before the old one expires. Either way - you should not have to pay a cent and hopefully you can get it fixed quickly.

1) Backup UxTheme.dll to UxTheme.000
2) Don't blame me if you screw this up; these directions are provided as-is and I am not responsible for the government raid that will happen if you conduct this change. I have tested this and it works for me and the 500 other folks who have used this method. Do not put tinfoil in the microwave. This is very simple to do: it amounts to changes to 17 values and takes less than five minutes. This also assumes you don't have StyleXP installed on your system and that you are working with the original UxTheme.dll.
3) Open UxTheme.000 in a hex editor, not Resource Hacker.
4) Replace the following 4 sections in the hex code with the values I provide. You will need to use the search function in the editor to find the exact address within the file to perform each edit:

New Value: 00

Address: 0x0000B624 --> 0x0000B629
Original: 0F 8C 80 00 00 00
New Value: 90 90 90 90 90 90

Address: 0x0000B6BB --> 0x0000B6C2
Original: 81 EC 80 00 00 00 56 57
New Value: 33 F6 8B C6 C9 C2 08 00

Address: 0x0000B71E --> 0x0000B71F
Original: 7C 38
New Value: 90 90

5) Check your changes; make sure you did not use the letter O in place of zero. SAVE.
6) Reboot to safe mode, rename UxTheme.dll to UxTheme.bak, and rename UxTheme.000 to UxTheme.dll.

Download hex-edit program here: http://www.tweakersguide.net/files/hexedit.zip
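For readers who would rather script an edit like this than use a hex editor, here is a hypothetical Python sketch of the same idea: it patches a copy of the file only after verifying the expected original bytes, which is exactly why the guide lists the "Original" values. The function name and the dummy data are my own; always work on a backup copy, as step 1 advises.

```python
def patch_bytes(data: bytes, offset: int, original: bytes, replacement: bytes) -> bytes:
    """Replace `original` with `replacement` at `offset`, refusing to
    patch if the bytes found there don't match what we expect."""
    assert len(original) == len(replacement), "patch must not change file size"
    if data[offset:offset + len(original)] != original:
        raise ValueError(f"unexpected bytes at {offset:#x}; wrong file version?")
    return data[:offset] + replacement + data[offset + len(original):]

# Example with dummy data standing in for the DLL contents:
blob = bytes.fromhex("00 11 7C 38 22 33".replace(" ", ""))
patched = patch_bytes(blob, 2, bytes.fromhex("7C38"), bytes.fromhex("9090"))
print(patched.hex(" "))  # -> 00 11 90 90 22 33
```

The verify-before-write check is the scripted equivalent of step 5's "check your changes".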
OPCFW_CODE
Boreal and sub-arctic forest dynamics in the face of climate warming The boreal forms a circumpolar belt of forest between 45˚ and 70˚N and is the second largest forested biome. It is important in both climate regulation and the global carbon cycle. Boreal forests occupy latitudes that are expected to warm most dramatically over the coming decades, and evidence indicates that changes are already underway in these systems. The boreal extends over a large climatic gradient thus mechanisms underlying boreal ecosystem response are likely variable, necessitating a process-based understanding across latitudes. Because permafrost thaw is one of the most immediate and system-altering manifestations of climate change across the entire boreal, our focus is on documenting responses of forests impacted by the presence of and changes in permafrost coverage and/or depth and the linkages and feedbacks of these changes with hydrology (Quinton and Hayashi) and gas flux (Sonnentag). The southern margin of permafrost is especially susceptible to thaw-induced land-cover change, since in this region, the permafrost is discontinuous, relatively thin (< 10 m), warm and ice-rich. Climate warming appears to have disrupted the balance between degradation and aggradation processes in these systems. In zones of discontinuous permafrost, permafrost forms the physical foundation on which trees develop, forming tree-covered peat plateaus where trees are likely to contribute to permafrost maintenance and aggradation processes through the reductions in radiation load and changes in snow accumulation. Evidence suggests that warming is leading to permafrost thaw and surface subsidence, which decreases forest cover while increasing wetland hydrological connectivity. 
Despite dramatic impacts of warming on the hydrology of these systems, we know little about corresponding changes in plant communities, interactions or feedbacks between vegetation and hydrological responses, or impacts on ecosystem productivity. Northern boreal forests near the forest-tundra transition have a markedly different set of limitations but are also likely to be highly sensitive to warming processes. Here, permafrost is continuous but the northern extent and spatial distribution of the forest is considered temperature-limited. The permafrost at the forest-tundra transition is continuous and relatively thick (>100 m). As such, permafrost degradation is characterized by a thickening active (seasonally thawed) layer, with changes to soil moisture and carbon cycling. The implications of these changes for plant communities have received much less study than warming-related distributional shifts. Quantification of dynamic responses of trees near the transition zone indicates reduced growth and decoupling of growth-climate relationships; however, data aimed at quantifying the response of these forests to climate change are limited. This aspect of our research focuses on the 550,000 km2 Taiga Plains ecoregion that occupies 48% of the Northwest Territories’ landmass and spans much of the latitudinal gradient of the Territory and its associated permafrost, climate and vegetation gradients. Large forest dynamics plots will be established at the end-member sites (Scotty Creek and Havikpak/Trail Valley Creek) and will form the first boreal sites within the Smithsonian Institution’s Global Earth Observatory Network (SIGEO; http://www.sigeo.si.edu/). These large plot sites will be connected by a representative and spatially extensive network of pre-existing small permanent sample plots through collaboration with the Government of the Northwest Territories ENR Forestry Division (see Laurier-GNWT Partnership website: http://www.wlu.ca/homepage.php?grp_id=12612).
OPCFW_CODE
I'd be happy to share how I use disk imaging software for system protection. There are several free disk imaging systems that work very well for doing a full drive image. When it comes to selecting one, my criteria are as follows. I want one that has its own routine for scheduling and allows me to easily select when it should run. I want one that makes incremental additions to the initial base in order to conserve drive space while speeding the daily process to less than 15 minutes. I want one that supplies the means to create bootable rescue media that can be used when the system fails to boot, so that I can deploy a workable image and easily get back in operation. I want one that will allow me to quickly (like 15 seconds) mount any chosen image date and recover some individual file in a manner like browsing with Windows Explorer. Lastly, I want one that is user friendly, since I will be using it on several computers that I don't have physical access to, and I would like to keep it simple enough that it's easy to lend phone support to those who aren't all that computer savvy. With those criteria in mind, my latest selection is [link removed]. It has all of those features. It is as user friendly as you will get while still having all the flexibility and power you want. The free version is good enough for most individual users. Business users should get the paid version. You will need a separate drive, either internal or external, to store the image archives on. I use a 759 gig USB drive. They can be had for approx. $60. A month of daily images on my system will amount to approx. 120 gigs total. Alternately, you can use a network drive. On the first day of each month I create a new base image of my system drive. Each following day of the month, I create an incremental backup that is related to that base image. The base image takes approximately 40 minutes to create. The incremental takes less than 10 minutes each day at first boot. Since I have that set for first daily boot, the system burden is less noticeable. With this schedule, it isn't important to have the computer running every day, since a missed scheduled act won't compromise any following schedules. I choose to make daily incremental images for myself since my system gets bashed by me testing software and operating fast and free. For others whom I support, I may choose weekly or even monthly. The native scheduler Aomei offers has those options, so I need not write batch scripts like in the old days. Even so, I use a simple batch file to maintain and trim the archives so that I keep no more than two months' worth and never less than one month's worth. You may do that manually or with a simple batch file. I'll publish that if asked. Trust me, I have tried, tested and used practically every disk imaging system that is available, both paid and free, and I find Aomei Backupper is first cabin. I just want something that I don't have to get involved with once it's set up, yet something that I can rely upon to work as I expect it to when the need arises.

To maintain our status as an educational organization, the only commercial links allowed in this forum are to CPAP-related manufacturer websites. This is stated in the Apnea Board Rules with details given in the Commercial Links Policy section. Moderator Action: Link Removed
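Since the poster's actual batch file isn't shown, here is a hypothetical sketch (in Python rather than batch) of the trimming logic described: keep no more than two months of archives and never less than one. The YYYY-MM folder-naming scheme is an assumption for the example, and a real script would delete the stale folders rather than just listing them.

```python
from datetime import date

def months_to_delete(folders, today):
    """Given archive folder names like '2024-03', return those whose
    month is at least two calendar months older than today's month."""
    cur = today.year * 12 + (today.month - 1)   # linear month index
    stale = []
    for name in folders:
        y, m = name.split("-")
        idx = int(y) * 12 + (int(m) - 1)
        if cur - idx >= 2:                      # older than two months
            stale.append(name)
    return stale

print(months_to_delete(["2024-01", "2024-02", "2024-03"], date(2024, 3, 15)))
# -> ['2024-01']
```

The linear month index makes the comparison work across year boundaries (e.g. December to February).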
OPCFW_CODE
Like many of today’s developers, I started my IT career as a Flash developer and spent much of my time learning new things from senior colleagues, books and blogs about the Flash Platform. I soon realised there was a lot to learn: the Software Development Life Cycle (SDLC), software design patterns, coding standards and much more. As my knowledge expanded, I came to understand that architecture, frameworks and coding standards are important. Before starting the discussion about application architecture, I would like to give some context for this post. Many different technologies are available; we pick one, build our knowledge of it and practise it. We live within that technology's limitations and boundaries while competing with other technologies. It is difficult enough to truly master a single platform. Of course, many developers are experts in multiple languages, but their knowledge and development practices in each language usually differ. For example, if you want to develop an application in two different languages, say Flex and Python, your knowledge of Flex doesn’t give you any advantage for the Python application development. However, knowledge of an architecture or framework will help you develop the application in both languages, and that is the reason behind this post on application architecture. From a set of frameworks, I selected PureMVC as my framework of choice. Before starting on PureMVC, let’s discuss what a framework is. As I understand it, a framework is a reusable set of libraries or classes for a software system which follow certain rules throughout the application. These rules are known as design patterns, so in other words we can say a framework is a collaborating set of design patterns. Frameworks are helpful as they give us the flexibility to implement the fastest solution to a given problem.

Currently there are lots of discussions going on about Flash Platform versus HTML5 features, development tools, usage and development standards. But I think there is one topic, architecture, which is common and useful across all technologies and languages: the application architecture/framework is the heart of an application’s standards, performance and scalability. The technology doesn’t matter for that. So for the reference example, I have selected my favourite platform to explain application architecture. The Adobe Flash Platform is superb for developing rich experiences, including websites and games for web, desktop and mobile users. Over the last few years this lightweight, interactive runtime has become the ideal choice for expressive, media-centric web software. A major benefit is that the Flash Platform has expanded to the desktop as well with Adobe AIR.

Goals of PureMVC

PureMVC is a lightweight framework based upon the well-known Model-View-Controller design pattern. The main goals of the PureMVC framework are:

- To separate the application’s code into three different tiers: model (data), view (UI) and controller (business logic)
- Speedy implementation with scalability and maintainability
- Reduced complexity for the developer

Another major benefit of using a well-known, formal framework is a common set of design patterns and a common way of adding features. From an organisational point of view, using such a standard produces clean coding standards across applications, reduces dependency on individual developers and cuts knowledge-transfer time for new developers. I think that is the best and most attractive benefit of using a standard, well-known framework. Here are some reasons why I like PureMVC:

- It is easy to learn, has great documentation, samples and tutorials, and is easy to use and also easy to extend.
- It facilitates a loosely coupled application architecture (publish-and-subscribe style), and scalability, maintainability and portability for your application.

Developers agree on separating an application’s code into different parts based on functionality. These separations are three major areas: model, view and controller. Let’s have a quick overview of these terms: the model is for your data, the view is for user controls and user interaction, and the controller decides how the model changes when the view is clicked and how the view should update when the model is updated. You can get a basic idea of the PureMVC framework from the conceptual diagram below. A fourth singleton, the Façade, simplifies development by providing a single interface for communication throughout the application. Below is a brief overview of each singleton:

- Model: manipulates the data model and retrieves data from remote services
- View: mainly refers to named mediators that adapt view components
- Controller: maps named command classes, which are only created when needed
- Façade: initialises the core singletons (model, view and controller) and provides a single interface for communications

You will use the Façade and the other actor classes (like Mediator, Proxy/Delegate and Commands) to interact with the singletons, as shown below.

Note: In the next part, I will post sample applications implemented with the PureMVC framework using different programming languages.
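To make the Façade/notification idea concrete, here is a toy sketch in Python. This is not the PureMVC library's actual API, just a minimal illustration of the publish-and-subscribe style through which its mediators, proxies and commands communicate:

```python
# Toy facade: routes named notifications to registered observers.
class Facade:
    def __init__(self):
        self._observers = {}   # notification name -> list of callbacks

    def register(self, note_name, callback):
        self._observers.setdefault(note_name, []).append(callback)

    def send_notification(self, note_name, body=None):
        for callback in self._observers.get(note_name, []):
            callback(body)

facade = Facade()
log = []
# A "mediator" subscribing on behalf of a view component:
facade.register("DATA_CHANGED", lambda body: log.append(f"view updated: {body}"))
# A "proxy" announcing that the model changed:
facade.send_notification("DATA_CHANGED", {"count": 3})
print(log)  # -> ["view updated: {'count': 3}"]
```

In real PureMVC the equivalent wiring is done by registering Mediator and Proxy instances with the Facade singleton and calling sendNotification, but the loose coupling is the same: the proxy never references the mediator directly.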
OPCFW_CODE
Web Hosting Tools, Scripts and Software

There are many factors to consider when searching for a web hosting provider or even starting your own web hosting business, for example: billing, customer support, payment processing, control panels, server monitoring, backups, Operating System, security and much more. The following is a collection of web hosting tools, software, scripts, control panels and other useful applications commonly used in the web hosting industry, arranged in alphabetical order. You should familiarize yourself with each of these tools before making any decisions about web hosting or related services.

Automated Software Installers
- Fantastico Deluxe - Commercial auto-install script repository used in conjunction with a control panel such as cPanel. Provides an automatic install of various scripts, including Content Management Systems and community forums, within seconds.
- Installatron - Multi-platform automatic script installer compatible with the Linux, BSD and Windows Operating Systems and the cPanel, DirectAdmin, InterWorx, Kloxo and Plesk control panels.
- Softaculous - Leading software auto-installer compatible with cPanel, Plesk, DirectAdmin, InterWorx and H-Sphere. Softaculous currently features a still-growing collection of 269 scripts and 1,000 PHP classes that can be easily and automatically installed with one click of the mouse through the supported control panels.

Backups and Data Protection
- CDP Standard and Enterprise Edition - Developed by Idera, CDP Standard (freeware) and Enterprise Edition (paid) are server backup tools that can create a copy of your data in minutes, back up to any secondary storage device, and give you the capability to restore/manage files remotely using a web-based GUI.
- R1Soft CDP Backups - Disk-to-disk backup solution for Windows and Linux servers. R1Soft offers very affordable Continuous Data Protection (CDP).
- SiteVault - Professional website and MySQL database backup software.
Create and restore backups of your entire website and databases.
- WHMEasyBackup - Automatic backup application designed to simplify scheduled backup of your entire WHM reseller account, and all domains hosted in that reseller account, to any desired destination.

Client Management, Billing and Customer Support
- BoxBilling - Completely free application for billing, customer support and client management.
- ClientExec - Client management and support system that integrates with most of the popular hosting control panels and domain registrars, which allows hosting accounts to be created automatically.
- HostBill - HostBill incorporates billing, customer support and hosting account automation in one package.
- Kayako Help Desk Software - Customer service software application with live chat support. Consists of three featured products: Kayako Fusion (multi-channel helpdesk), Kayako Engage (live customer support with chat) and Kayako Resolve (helpdesk ticket system).
- osTicket - Open source ticket system that can be used for customer support of web hosting clients. osTicket organizes submissions sent via phone, email and online forms into a single simple web interface.
- WHMCS - Completely automated system to manage clients, from billing to customer support. Gives full control of all facets of customer management from the point of signup to cancellation.

Content Delivery Networks
- CloudFlare - CloudFlare provides free and commercial, cloud-based services such as a CDN (Content Delivery Network) and optimizer that can be used to strengthen security and improve the performance of websites.
- CloudFront CDN - CloudFront is Amazon's content delivery service that offers low latency through a large network of edge servers across the world. CloudFront has no minimum fee structure. You only pay for the bandwidth your website consumes (per gigabyte).
- MaxCDN - Very popular CDN used by the likes of Stack Overflow, WP Engine and Yoast.com.
As of 8/27/2012 MaxCDN is offering the first terabyte of data transfer for free.
- OnApp CDN - Feature-rich Content Delivery Network framework that enables hosting service providers to set up a global CDN without having to deal with expensive infrastructure costs. OnApp CDN gives you access to 91 POPs (Points of Presence) in 34 countries.

Control Panels
- cPanel and WHM - cPanel is a very popular control panel used in the administration of web hosting accounts to quickly and easily navigate the various web hosting aspects of a website with the use of a GUI (Graphical User Interface) and automated tools. It's the Swiss Army Knife of web hosting. WHM (WebHost Manager) is a companion control panel to cPanel that provides the ability to manage multiple hosting accounts on a dedicated server by either the server owner or hosting resellers.
- DirectAdmin - Web hosting control panel built upon ease of use, speedy performance and stability for an affordable price.
- Plesk - Plesk is a web-based control panel used to automate the routine tasks associated with running a web hosting business and daily server management.

Databases
- MySQL - Widely used open source database platform that is extremely reliable, easy to implement and performs very efficiently.
- phpMyAdmin - Free tool developed in PHP used in the management of MySQL. phpMyAdmin supports many operations through a user interface and includes the capability to execute all SQL statements.

Operating Systems
- CentOS - Free, enterprise-class distribution of Linux used on the servers of many web hosting providers.
- CloudLinux - Operating System specifically optimized for a shared hosting environment to provide better performance and greater stability. With CloudLinux, each shared hosting client is allocated their own resource limits in a similar fashion to virtualization technology. This means one abusive user on a server doesn't affect other clients.
Scripting Languages and Web Frameworks
- Perl - Perl is a dynamic programming language with more than 24 years of research and expansion.
- PHP - PHP is a vital tool in the web hosting industry. PHP is a very popular, well supported and highly documented scripting language used by the vast majority of web developers that can be embedded in HTML files. PHP is available free of charge.
- Ruby on Rails - Open source web framework optimized for programmer happiness and sustainable productivity.

Security
- BulletProof Security Plugin - Security plugin for WordPress installs that offers .htaccess protection from hacking attempts using SQL injection, Base64, XSS, CRLF, CSRF, RFI and other types of code injection. BulletProof Security can alert you to unsecure files with improper chmod permissions and secure your wp-config.php, php.ini, php5.ini and wp-admin WordPress files and directories from hacking attempts.
- Chap Secure Login - This WordPress plugin encrypts your password when logging in using the SHA-256 hash algorithm and the CHAP protocol.
- ezeelogin - Server management application that behaves as an SSH gateway and provides secure storage of root passwords using 4096-bit RSA encryption.

Virtualization
- KVM - KVM, short for Kernel-based Virtual Machine, is a virtualization framework for Linux running on x86 hardware.
- OpenVZ - Linux software solution that provides container-based virtualization. Enables the creation of multiple Virtual Private Servers on a physical machine that give users root access and independent rebooting.
- SolusVM - Software used to manage VPS clusters with an easy-to-navigate GUI (Graphical User Interface). Compatible with OpenVZ, KVM, and Xen virtualization technologies.
- Xen Hypervisor - The leading open source software for virtualization. Xen is fast, secure, and has very little overhead. It is compatible with a large portion of operating systems, such as Linux, Solaris, Windows and many flavors of BSD.
If you know of any useful web hosting tools, scripts, software applications or other resources related to web hosting that have been omitted from the list, please let me know.
OPCFW_CODE
In this blog we take a look at what makes up Azure Integration Services, its history with BizTalk Server, our work in integration and the Ballard Chalmers Data Integration Hub Framework.

THE HISTORY OF AZURE INTEGRATION SERVICES

Azure Integration Services is the latest Microsoft Azure toolset for creating data integrations between IT systems. It replaces BizTalk Server, which has existed since 2000. BizTalk has served well, but it can now definitely be categorised as a legacy system. It needs to be installed, configured, patched and managed by professional system administrators. If resilience and scalability are needed, then at least two BizTalk Servers are required, and these both need to be Enterprise edition, all of which increases the cost significantly. Finally, BizTalk uses SQL Server as its data store for all message queues, so at least one SQL Server needs to be involved in the deployment, and that requires professional system administrator support as well. Notwithstanding all that, BizTalk does have all the features needed for integrating enterprise systems, including:

- Adaptors for just about every data source, including FTP, HTTP and SQL, and systems such as Oracle, SAP and EDI (including AS2, X12 and EDIFACT)
- Schema management and mapping
- Message flow orchestration and transaction management

Microsoft briefly toyed with producing a cloud services version of BizTalk called BizTalk Services, but this had greatly reduced functionality and overlapped with the new and more flexible cloud services already in Azure. So, BizTalk Services was eventually dropped in favour of what is now called Azure Integration Services.

WHAT IS AZURE INTEGRATION SERVICES

Azure Integration Services is not a new Azure service but instead is a collection of other Azure services.
This makes a lot of sense: why try to produce a new service that contains functionality that already exists in other services? Just add new functionality where needed and take advantage of the whole of Azure for everything else. The core components of Azure Integration Services are detailed here: Azure Integration Services | Microsoft Azure, and consist of:

- Logic Apps: Provides data connection and orchestration features
- Service Bus: Secure message queuing service
- API Management: Provides secure control and throttling features for APIs
- Event Grid: Event routing system
- Azure Functions: Simplifies complex orchestration problems with event-driven serverless compute
- Azure Data Factory: Visually integrates data sources to construct ETL and ELT processes

Plus, you can use any other Azure service that you wish, such as SQL Azure, Azure Tables, CosmosDB, Blob Storage and much more.

Logic Apps provides the following main features:

- Orchestration/workflow management of message flows using a graphical user interface. The example Logic Apps orchestration below shows a simple email approval workflow.
- Connectors that allow it to connect to most data sources. A lot of these have been inherited from BizTalk and refactored for the cloud, for example:
  - EDI connectors for AS2, X12 and EDIFACT: see B2B enterprise integration workflows – Azure Logic Apps | Microsoft Learn
  - Oracle, SAP, etc.: see What are connectors – Azure Logic Apps | Microsoft Learn
- Gateways for accessing data on-premises.
The gateway is installed on-premises and allows Logic Apps in the cloud to manage data that is stored on on-premises servers, without the need for a VPN. In addition, because Azure Integration Services is built on Azure using Azure services, the system is: - Fully managed by Microsoft, so the level of system administrator support is minimal, with no need to patch, back up, or anything like that - New code releases can be automatically deployed using the release pipeline features built into Azure DevOps - New deployments can be completely managed using Azure Resource Manager (ARM) templates, which can automate the setup of all the Azure services In summary, Azure Integration Services offers a greatly reduced administration overhead compared with BizTalk and other systems. TRANSPARITY DATA INTEGRATION HUB FRAMEWORK Transparity have carried out a considerable number of data integration projects over the last fifteen years. Many of the earlier ones were based on BizTalk, some were developed in C# as extensions of existing systems, and more recently they have been based on Azure Integration Services. Although each of these projects involved different source and target systems and data sources, with different schemas and workflows, there is a common pattern to all of them. Some examples are: - Different systems have different data schemas and there needs to be a mapping between them. Usually, this is done via a central canonical schema - Different systems use different IDs for reference data. For example, an order from one system may refer to a particular product using product id 0001 and another system may refer to the same product as product id A021.
There needs to be a conversion between these - There is generally some standard business logic and validation that needs to be implemented against the standard canonical schema, that is common regardless of the source of the data, and will not change even if one of the source or target systems is replaced with a different system - Some systems can call an API or support WebHooks, so a secure API is needed with firewall protection, control over who can call it, and throttling configuration to protect against one system overloading the hub at the expense of others - Other systems are not API-centric, so the system needs to be able to move data from source to target using files, databases, SOAP APIs, or whatever - Finally, the system needs to be able to be deployed and released to Azure automatically, using ARM templates and Azure DevOps release pipelines Having done all of the above a number of times for different clients, Transparity decided there was an opportunity to improve the speed and quality of future integrations by standardising on the Data Integration Hub Framework used for these projects. The architecture of the resulting framework, which is 100 percent based on Azure Integration Services, is shown in the diagram below: With respect to the framework, it can be observed that: - Some of the logic is encapsulated in Azure Functions and in other cases in Azure Logic Apps, depending on which is easiest, fastest or cheapest for the particular job at hand - A Web Application Firewall (WAF) is used if external systems are accessing the Hub - Blob storage is used to store messages, in addition to Service Bus, for messages greater than 256 KB - Standard ARM templates have been written for each of the standard components, such as Azure Service Bus, to speed up creating new systems.
- Standard release pipelines have been deployed to speed up releasing new versions of the system - Standard monitoring is added to the system to track when issues occur or if the system is performing slower than expected

Azure Integration Services is a great step forward: a world-class cloud-based system for integrating systems in the cloud and on-premises. When used in combination with the Transparity Data Integration Hub Framework, even the most complex enterprise data integrations can be created easily and with low risk, all using a tried and tested architecture and components based on Azure Integration Services, backed up with fully automated deployments and monitoring.
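As an illustration of the standard ARM templates mentioned above, a minimal template for an Azure Service Bus namespace looks something like this (a sketch, not the framework's actual template; the parameter name and API version are examples):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "namespaceName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.ServiceBus/namespaces",
      "apiVersion": "2021-11-01",
      "name": "[parameters('namespaceName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard" }
    }
  ]
}
```

A template like this, parameterised per environment, is what an Azure DevOps release pipeline deploys on each release.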
#include "make_jmp.h"
#include <cstdint>  // INT32_MAX
#include <cstdlib>  // std::abs

#define ABS(x) ((x) >= 0 ? (x) : -(x))

void make_jmp32(intptr_t src_addr, intptr_t dst_addr) {
    // rel32 jmp (E9 xx xx xx xx): the offset is relative to the address of
    // the *next* instruction, i.e. src_addr + sizeof(*jmp).
    auto jmp = (JumpInsn *) src_addr;
    jmp->opcode = JMP_OPCODE;
    jmp->offset = (int32_t) (dst_addr - (src_addr + sizeof(*jmp)));
}

void make_jmp64(uintptr_t src, uintptr_t dst) {
    // Absolute 64-bit jump: push the low 32 bits of the target, overwrite the
    // high dword of the pushed value with a mov, then ret to the full address.
    auto jmp = (Jmp64Insn *) src;
    jmp->push_opcode = PUSH_OPCODE;
    jmp->push_addr = (uint32_t) dst;           /* low 32 bits */
    jmp->mov_opcode = MOV_OPCODE;
    jmp->mov_modrm = JMP64_MOV_MODRM;
    jmp->mov_sib = JMP64_MOV_SIB;
    jmp->mov_offset = JMP64_MOV_OFFSET;
    jmp->mov_addr = (uint32_t) (dst >> 32);    /* high 32 bits */
    jmp->ret_opcode = RET_OPCODE;
}

void make_jmp(void *src, void *dst) {
    auto src_addr = (intptr_t) src;
    auto dst_addr = (intptr_t) dst;
    // A rel32 jmp only reaches targets within +/-2 GiB, so fall back to the
    // absolute 64-bit sequence for larger distances. (After std::abs the
    // distance can never be below INT32_MIN, so one comparison suffices.)
    int64_t distance = std::abs(src_addr - dst_addr);
    if (distance > INT32_MAX)
        make_jmp64(src_addr, dst_addr);
    else
        make_jmp32(src_addr, dst_addr);
}
In May 2014 I launched a site about metroidvanias. It was a hasty launch with a lot of tinkering afterwards. A year later I decided to remake it from scratch. The initial site used Wordpress as a backend with the Ink theme by CodeStag. I made a few tweaks and ended up writing a rather extensive child theme for a task that was actually more about removing clutter than adding stuff. So I tuned the typography, changed and added some animations, and moved a few things around a bit. And every time there was something in need of fixing, I got that feeling of inner panic that I have when I see someone else's code and have no idea what's going on there. I'm not a stranger to stuff like HTML and CSS, and I've written a few lines of code here and there. But there's a huge difference between making a JS prototype and writing production code. That didn't stop me from adding fullscreen-width images (as seen above) to blog posts a few months before it became possible with a simple setting in the original theme. In May 2014 I had ended up with something that looked good, but was too much of a hassle to work with. There was still a lot to be done under the hood, like removing the usage of Fontawesome (which is awesome, but too much of an overkill for a site with 3 icons). I started by making a basic HTML boilerplate for all 4 (and a half) templates the site was using. I loosely copied the existing site layout and started polishing various layout elements in no particular order. At the same time I was working on something you could have noticed in the previous image. I've always liked Museo, a highly legible, low-contrast geometric font with nice cyrillic glyphs. Based on that fancy new search I made a new template for search results, differentiating it from the homepage and providing some text description for blog entries. There was also the matter of adding proper responsive styling for nice layouts on mobile and tablet.
With my own CSS in place it was much easier to do than hacking something on top of someone else's Wordpress theme. At some point I started thinking that it would actually be a good idea to look for a different backend. Instead of the highly complex and feature-heavy Wordpress I would have preferred something simple, fast and flexible. I needed a CMS where the really important things, like server-side image resizing (for mobiles), were taken care of. And this backend also should have been flexible enough to add the few things missing from the base package. I ended up choosing the flat-file GRAV, which is awesomely fast and extendable. Since I had a very specific layout in mind, I wrote a plugin for my image output, serving it just right with the help of the bLazy library to do the lazy-loading. There's tons of small stuff like the refined text selection highlight (seen above) or the 404 page. And many things just seemed hard to me, but turned out to be quite easy when I approached them. Loading more items ended up being nothing more than a special template and 20 lines of JS. You can't really stop trying to make something better. SqncBrk has since had numerous new blog entries and design improvements, and eventually was fully redesigned again.
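The "loading more items" logic mentioned above really does boil down to very little: pick the next slice of posts, render it, repeat. A rough sketch of the core (not the site's actual code; names and page size are made up):

```javascript
// Pure helper: given all posts, how many are already rendered, and a page
// size, return the next batch to append (empty array when there's no more).
function nextBatch(posts, rendered, pageSize) {
  return posts.slice(rendered, rendered + pageSize);
}

// Example: 5 posts, 2 already on screen, pages of 2.
const posts = ['a', 'b', 'c', 'd', 'e'];
console.log(nextBatch(posts, 2, 2)); // → [ 'c', 'd' ]
console.log(nextBatch(posts, 5, 2)); // → []
```

The rest is a click handler that appends the returned items to the list container and hides the button once the batch comes back empty.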
The second version of the FCF framework. At the moment, the code from the first version is being migrated and restructured. Detail info Register Date:2022-11-30 06:09 e-Dokyumento is a web-based Document Management System that stores, organizes, indexes, routes, and tracks electronic documents. It automates the basic office document workflow such as receiving, filing, routing, and approving of hard-printed documents through capturing (scanning), digitizing (OCR reading), storing, tagging, and electronically routing and approving (digital signature) of documents. Detail info Register Date:2022-11-25 10:48 The ESP8266 version of Midbar utilizes the 3DES + AES + Blowfish + Serpent encryption algorithm alongside the ESP8266's built-in memory to store eight passwords and four credit cards in encrypted form. It also utilizes HMAC SHA-256 to verify the integrity of the stored logins. The ESP32 version of Midbar is a password, credit card, note, and phone number vault that utilizes a strong encryption algorithm (AES-256 + Serpent + AES-256) combined with a sophisticated embedded database (SQLite3) to keep your personal data safe. The purpose of Midbar is to significantly increase the cost of unauthorized access to its user's personal data. You can find the tutorial here: ESP8266 Version: https://www.instructables.com/Midbar-ESP8266-Version/ ESP32 Version: https://www.instructables.com/Project-Midbar/ Register Date:2022-11-20 23:11 This is a sound-playing utility for C++ apps that can asynchronously start and stop music loops, initiate transient sounds, and allow unlimited sound concurrency. It plays WAV files via OpenAL and runs on Windows, OSX, and Linux platforms. Nice examples for each OS are included. It is suitable for any C++ application that needs music, sound loops or transient sound effects; e.g. games. There are no software dependencies; this utility is self-contained.
* I am currently using it for sound in my OpenGL slider-puzzles app RufasSlider (written in C++): https://sourceforge.net/projects/rufasslider/ Register Date:2022-11-09 06:43 This is a soccer-themed, 3D sokoban puzzle game that includes 3 external solvers and 3 embedded solvers. Register Date:2022-11-09 00:00 This is an Ada utility that can play WAV files on Windows, OSX, and Linux, using Ada tasking and the OpenAL libraries. It includes a partial Ada binding to OpenAL. It provides sound-playing capabilities for Ada apps to: * asynchronously start and stop music/sound loops, * initiate transient sounds, * allow unlimited sound concurrency. Examples for each OS are included. * Suitable for any Ada application that requires background or transient sound effects; e.g. games, simulations. * There are no software dependencies; this package is self-contained. Register Date:2022-11-08 23:45 This is a command-line sokoban solver written in Ada. It is "generic" in the sense that it contains no domain-specific strategies. It also provides a demonstration of the advantage of using the Hungarian Algorithm. * no installation * no dependencies * simply unzip in your Downloads directory, and run. Pre-built executables are provided in 3 variants: * hbox4.exe (Win64) * hbox4_gnu (Linux) * hbox4_osx (Mac/OSX) Note that this solver may be run from a thumb drive: simply unzip onto a USB flash drive formatted to match your system, and run. Register Date:2022-11-08 10:45 A container running hgweb.fcgi via lighttpd. Allows LDAP and Kerberos. More AAA options to come. Detail info Register Date:2022-10-21 02:01 MDR stands for "Mateusz' DOS Routines"; it is a collection of *.C files (mostly OpenWatcom-targeted) that contain a variety of routines helpful during the development of DOS applications. I am not calling this a "library" because my feeling is that a library ought to be strictly organized, have a consistent, uniform API, etc. Here, we really deal with sets of C files.
The general focus of my routines is on 8086-class machines (real mode). My compiler of choice is OpenWatcom. Register Date:2022-10-06 05:14
/**
 * Registers a permission.
 * @param {string} name
 * @param {*} permDefault - true for anybody, false for permission holders only, or 'op' for operators only.
 * @param {Object[]} [parents] - entries of the form {permission, value}
 */
function registerPermission(name, permDefault, parents) {
    try {
        var manager = loader.server.pluginManager;
        // An already-registered permission must be removed before re-adding it.
        var existing = manager.getPermission(name);
        if (existing) {
            manager.removePermission(existing);
        }
        // String(...) so that boolean defaults (true/false) also map onto the enum.
        var bukkitDefault = org.bukkit.permissions.PermissionDefault[String(permDefault).toUpperCase()];
        var permission = new org.bukkit.permissions.Permission(name, bukkitDefault);
        if (parents) {
            for (var i in parents) {
                permission.addParent(parents[i].permission, parents[i].value);
            }
        }
        manager.addPermission(permission);
        return permission;
    } catch (ex) {
        log(ex);
        return null;
    }
}
How to run an OpenCV-related app without using OpenCV Manager in Android In my Android application, I use static loading of the OpenCV library, i.e. OpenCVLoader.initDebug(). It returns true when run on the emulator, but returns false when run on a (mobile) device. If I use OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_3, this, mLoaderCallback) and the OpenCV Manager .apk is already installed, then the above code works fine on the emulator and on the device. Here I want to run an OpenCV-related app without installing OpenCV Manager.apk. Please help me. Thanks in advance. possible duplicate of How to integrate OpenCV Manager in Android App Alternate answer: http://stackoverflow.com/a/35135495/5611377 OK, what you are talking about is static initialization of the OpenCV library. See this official OpenCV help regarding this matter. Also have a look at these StackOverflow topics and try compiling your code. Cheers. The OpenCV-related app works fine with the OpenCV Manager .apk, but I want it without OpenCVManager.apk. I said that "without OpenCVManager.apk, my app runs fine in the emulator, but on the device it gives a 'force close' error". @sathishkumar.challa did you read Link1 completely? Please do read every line; your answer is within that doc, i.e. without using OpenCV Manager. Thank you very much, I got the solution (previously I did not copy all the lib files from the native folder). For the Android Studio users out there, I made a blog post detailing what I did to get around the OpenCV Manager app installation prompt: http://usefuljavanotes.weebly.com/blog/how-to-use-opencv-without-opencv-manager Here is a GitHub repo that includes the code I implemented in the blog post.
Feel free to download it and test it on your system: https://github.com/JamieLee629/OpenCVTest The other OpenCV parts of the code not mentioned in the blog post were courtesy of Code Onion Blog: http://blog.codeonion.com/2016/04/09/show-camera-on-android-app-using-opencv-for-android/ I compiled this answer using these sources: https://stackoverflow.com/a/45684065/6030520, https://stackoverflow.com/a/35135495/6030520, https://stackoverflow.com/a/20259621/6030520
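The "copy the libs" fix the asker mentions amounts to bundling OpenCV's prebuilt .so files inside the APK so that System.loadLibrary() can find them at runtime instead of asking OpenCV Manager for them. A sketch (the SDK path is an assumption; adjust it to wherever you unpacked the OpenCV Android SDK):

```shell
# Copy the ABI folders (armeabi-v7a, arm64-v8a, ...) from the OpenCV SDK's
# native libs folder into the app module's jniLibs directory; Gradle then
# packages everything under jniLibs into the APK.
OPENCV_SDK=${OPENCV_SDK:-OpenCV-android-sdk}
mkdir -p app/src/main/jniLibs
cp -r "$OPENCV_SDK/sdk/native/libs/." app/src/main/jniLibs/ 2>/dev/null \
  || echo "SDK not found at $OPENCV_SDK -- copy the ABI folders manually"
```

With the libraries in place, OpenCVLoader.initDebug() succeeds on a real device too, because the natives are loaded from the APK itself.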
The Windows registry is the storage space on your computer where all software settings and system preferences are stored. When you try to troubleshoot an error, you often see tutorials on how to change, add, or delete a registry key. Sometimes, however, we run into a small problem while trying to edit, delete, or add a registry key. This guide shows how to take ownership of the registry in Windows 10 so that you can make the changes you want. Why is the registry preventing you from making changes? Windows 10 does not want you to accidentally make changes that could damage your system. Even if you are an administrator, you do not have the right (at least by default) to modify or delete certain keys. In most cases, it’s best not to tamper with registry keys that you don’t have permission to change. If you have reason to believe that something is causing registry problems, you can always reset the registry to fix the errors. However, if you are trying to configure part of the system manually and you are sure that you have the technical knowledge to determine that the action you are taking will not damage the system, you can take full control of the registry keys. Take full ownership of the Windows registry from Registry Editor Before making any changes to the registry, it is a good idea to back it up. That way, if you delete the wrong items, you can restore things quickly and easily. - Press Windows + R, type regedit, and press Enter to open Registry Editor. - Locate the registry key that you want to have full access to. In the picture we have used the Windows Defender key: - Right-click the key and select Permissions. - In the Permissions window, click Advanced. This opens Advanced Security Settings. At the top of the window, next to Owner, you will see the current owner (in our example, SYSTEM) and a Change option. - Click Change. Type Administrators in the 'Enter the object name to select' field, then press OK.
- This will bring you back to Advanced Security Settings. Double-click Administrators in the list of permission entries. Select the Full Control checkbox under Basic permissions and press OK to save the changes and exit. When you are done, you can change the key as you wish. However, if you find this procedure a bit cumbersome, you can also use a third-party tool to simplify it. Take full ownership of the Windows registry with a third-party tool After launching the application, add the registry address of the key you are trying to take full ownership of. Next, select the user accounts you want to give full control to and click the Full Control button in the Permissions section. Take control of the registry You now have the authority to modify the registry as you see fit. However, it’s best not to mess with the registry if you’re unsure of what you’re doing. In some cases, you may get your system into trouble. For this reason, it is a good idea to always back up your registry. If something goes wrong, you can use a backup to restore the registry to its previous working state. However, if things don’t go according to plan, you may need professional help, but you can also take care of a lot of Windows registry errors yourself.
Announcing Scala.js 1.9.0 Feb 14, 2022. We are excited to announce the release of Scala.js 1.9.0! Starting with this release, Scala.js uses its strict-floats mode by default (which previously required an opt-in linker setting): Float values are now always guaranteed to fit in 32-bit floating point data, and all Float operations strictly follow IEEE-754 float32 semantics. This release also brings support for java.util.BitSet, and fixes some bugs. It also updates the version of the Scala standard library to 2.13.8 for 2.13.x versions. Read on for more details. If you are new to Scala.js, head over to the tutorial. Bug reports can be filed on GitHub. If upgrading from Scala.js 0.6.x, make sure to read the release notes of Scala.js 1.0.0 first, as they contain a host of important information, including breaking changes. This is a minor release: - It is backward binary compatible with all earlier versions in the 1.x series: libraries compiled with 1.0.x through 1.8.x can be used with 1.9.0 without change. - Despite being a minor release, 1.9.0 is forward binary compatible with 1.8.x. It is not forward binary compatible with 1.7.x. Libraries compiled with 1.9.0 can be used with 1.8.x but not with 1.7.x or earlier. - It is not entirely backward source compatible: it is not guaranteed that a codebase will compile as is when upgrading from 1.8.x. As a reminder, libraries compiled with 0.6.x cannot be used with Scala.js 1.x; they must be republished with 1.x first. Strict floats by default Until 1.8.0, Scala.js used non-strict floats by default. This meant that Float values and their operations were allowed to behave like Doubles; either always, sometimes or never, in unpredictable ways.
This was done in the name of run-time performance, because correctly implementing 32-bit float operations requires the built-in function Math.fround. It was considered acceptable because there are few use cases for demanding the reduced precision of Floats, and even the JDK did not mandate strict floating point operations by default. It was always possible to require the Scala.js linker to use strict float semantics through a dedicated linker setting. Starting with 1.9.0, Scala.js uses strict float semantics by default. The rationale for this change is that several conditions have changed since Scala.js was first designed: - We emit ECMAScript 2015 by default, which requires that Math.fround be supported - Even when targeting ES 5.1, Math.fround has been widely available in browsers and other JS engines for years - Java 17 restored strict floating point operations everywhere Using strict floats means that Float values will always be representable with 32 bits, and that Float operations will always follow the IEEE-754 specification for the float32 format. This should not have any impact except in the following case: x.isInstanceOf[Float] used to return true for any number. It will now return false if the value cannot be represented with a 32-bit float. This change of semantics can in theory break some code, although we do not expect that to happen on any non-contrived example. Since using strict float semantics is a link-time decision, it applies to the whole program. Therefore, the changes are also applied to the code in libraries that you may use. The switch to strict floats may have a slight performance impact on the isInstanceOf[Float] operation, although benchmarks suggest that it is negligible. There will be a performance impact if you are targeting a JS engine that does not support Math.fround (such as Internet Explorer), which implies emitting ES 5.1 code. On such engines, Scala.js uses its own version of fround in user land (a so-called polyfill).
We have optimized our fround polyfill to the greatest extent possible, but benchmarks still suggest that Float-intensive applications can experience up to a 4x performance hit. Reverting to non-strict float semantics If you really want to, you can switch back to non-strict float semantics with a linker setting. This is however deprecated, and will eventually cease to have any effect in a later major or minor version of Scala.js. New JDK APIs The following class was added (thanks to @er1c): The following methods were added (thanks to @armanbilge): New ECMAScript APIs The following methods of js.BigInt were added (thanks to @catap): Upgrade to GCC v20220202 We upgraded to the Google Closure Compiler v20220202. Among others, the following bugs have been fixed in 1.9.0: - #4616 Using the SmallestModules module split style causes a reference error - #4621 The side effects of JS binary (and unary) operators can be lost - .knownSize always returns 0 You can find the full list on GitHub.
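For reference, the strict-floats linker settings referred to above look roughly like this in build.sbt (a sketch from memory of the Scala.js sbt plugin API; check the official documentation for the exact form):

```scala
// build.sbt
// Opt in to strict float semantics on Scala.js <= 1.8.x:
scalaJSLinkerConfig ~= { _.withSemantics(_.withStrictFloats(true)) }

// Revert to non-strict floats on 1.9.0+ (deprecated, will eventually be a no-op):
scalaJSLinkerConfig ~= { _.withSemantics(_.withStrictFloats(false)) }
```

Because this is a link-time setting, it applies to the whole program, including library code.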
The recent technological leap we can observe doesn’t look extraordinary to us. It began about 50 years ago, and each decade the speed increases. We have become used to complex technologies, compound gears, and multi-step solutions. Nevertheless, one of the drivers of progress is the simplification of existing processes, because it leads to one of the major business benefits: cost reduction. Developers, especially those who have a direct impact on a production process, streamline existing operational mechanisms, even when working with data that requires multilevel operations, as in my example of programming in Java. Learning how to simplify your work is a skill possessed by advanced specialists. I’d like to provide just one example from my own experience, with some conclusions I reached afterwards. Struggling With Java Complexity Java people and clients find the benefits of the language obvious: from the beginning Java had a nice database interface, probably the best one at the time, and still the best one even now. However, even simple operations required wordy, heavy requests and complicated sequences of actions; not too complicated, but inconvenient for sure. Working with an ORM system (a programming technique for converting data between incompatible type systems in object-oriented programming languages - Wikipedia) for Java, a framework called Hibernate, I thought I’d found the easiest, best documented, and, if I may say so, ideologically correct way (it seems not only for me) to interact with databases. Unfortunately, tiresome work with code makes ORM and Hibernate party poopers. Why don’t I like ORM anymore? A long learning curve The Hibernate documentation is not that big and seems like easy reading. But based on my experience interviewing more than 50 Java developers, I can confidently say that only 3 of them were able to answer some advanced questions about this framework.
HQL – an additional language to learn It’s not enough to know SQL to use Hibernate; you have to learn HQL, which is rather simple, but whose semantics differ from SQL’s. There are also some miscellaneous issues, such as the large number of libraries to be included, some overhead at startup, more load on the CPU, etc. It’s easy to shoot yourself in the foot - the human factor It’s easy to forget something, to have a gap in knowledge, and so on. Frankly speaking, completely innocent things can suddenly cause strange side effects, as often happens. This list is endless. But is there something regular developers can use instead? Right now Spring JDBC seems to be a good replacement for ORM, being a light, simple, and stateless JDBC abstraction, enabling us to write database requests and obtain results with a single line of code. Common operations require fewer commands to be written and processed. Everything is very compact, functional, and with practically zero overhead. By choosing this method of interacting with databases I’ve not only decreased the time required, but significantly reduced the development costs (time equals money). Simple and Affordable Things My experience shows that the time you spend searching for simple development solutions - new tools, frameworks, technologies - will totally pay off in the future. As a developer or consumer, while doing a risk calculation or cost forecast, don’t forget to analyse all available options that can solve your problem. Companies that provide industries from Financial Services and Media to Healthcare and Travel with fast time-to-market development, with the help of highly skilled software engineers, as DataArt does, have the ability to choose the correct up-to-date technologies and services. Nevertheless, while pursuing cost and time reduction, remember what the proverb says: cheap things often prove expensive in the end.
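As a small illustration of the HQL point above: the two languages look deceptively similar but query different things. This contrast is not from the article; the entity and table names are made up.

```java
// SQL queries tables and columns directly (e.g. with Spring JDBC);
// HQL queries mapped entities and their properties, and Hibernate
// generates the dialect-specific SQL behind the scenes.
public class HqlVsSql {
    // SQL: table "users", column "active" as a numeric flag
    static final String SQL = "SELECT u.name FROM users u WHERE u.active = 1";
    // HQL: entity "User", property "active" as a boolean
    static final String HQL = "select u.name from User u where u.active = true";

    public static void main(String[] args) {
        System.out.println(SQL);
        System.out.println(HQL);
    }
}
```

Knowing SQL gets you most of the way, but mistakes like writing a table name where HQL expects an entity name are exactly the kind of friction the article complains about.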
I still see quite often that an RJ45-DB9 console cable is attached to some devices, when I do not even have a computer with an old serial port. So I need to get either a USB-DB9 adapter or a USB-RJ45/console cable separately. Is it just that it takes time to move away from the old serial port, or am I missing something? Why is the RJ45 DB9 console cable still used, when all the latest computers do not have serial ports? [closed] Why the down vote?– Maksim LuzikSep 18, 2014 at 8:20 1Because you've not read serverfault.com/help/dont-ask.– Jenny DSep 18, 2014 at 10:53 This question isn't answerable by us but rather by the manufacturer of said equipment/cable.– joeqwertySep 18, 2014 at 14:07 Good computers have serial ports. Adequate computers support USB to Serial adapters.– Tom O'ConnorSep 18, 2014 at 22:27 Take a counter-example. We have some equipment that has a mini-USB serial port (or rather, an RS-232 signalling protocol carried over a mini-USB socket/plug; thus not really USB per se). This is a big pain in the proverbial, as trying to find a mini-USB adaptor was quite challenging (particularly on the local market). We ended up having to order the particular adaptor from the vendor. If they had used micro-USB, I could have made an appropriate cable myself (although this would still require acquiring a micro-USB plug, soldering, etc.). At least with an RJ45, crimping equipment is widely available, so I don't need to worry quite so much about the physical adaptation. Note that RS-232 etc. don't specify the physical adaptation; DB-25 was common around the time of the dial-up modem, then the smaller DB-9, and then RJ-45.
It's worth noting that RJ-45 also has much greater cost efficiencies, and with a small adaptor cable (easily made yourself), you could turn a regular Ethernet cable into an appropriate rollover serial cable (much to the relief of tech-support personnel at the time). So, to answer your question: - the cost of an RJ-45 socket is very cheap - the cost and availability of cable components is low and easy - USB miniaturisation is an on-going process and makes it harder to integrate - also, extra circuitry would be required to drive the line logic and implement a USB-serial profile on the device, raising cost. Such cost is pointless anyway, as serial interfaces are increasingly de-emphasised for day-to-day configuration. (Which makes it all the more important to find a suitable cable, or to be able to make one with materials at hand when things are going a bit pear-shaped.) Thanks for the answer Cameron. I am not sure if my question was really understood. I am fine with the RJ-45 socket on the devices, but what I do not understand is why the cables have a serial connector on the other side, when the serial connector is dying. There seem to be RJ45 to USB cables already, but RJ45 to serial cables are still quite popular. Do most network administrators still use RJ45 to serial cables with computers that have a serial port available? Sep 18, 2014 at 10:36 Although I'm sure you could find an example of an RJ-45 rollover cable (suitable for Cisco) to USB [with a USB to serial microcontroller embedded into it], I think most people just carry a USB to DB-9 adaptor (although I think they get pretty small these days). Compatibility has been known to be an issue with cheaper devices (particularly with regard to RTS timing in some versions of Windows). In my experience (not that I do network admin for a job), this is what I have experienced/witnessed. Sep 18, 2014 at 12:03 Who said "serial connector is dying"?
I'm sorry you didn't bother to buy equipment with an RS-232 port, but it's alive, well, and useful. Sep 18, 2014 at 20:33
[Package Issue]: Microsoft.Office - Installer hash does not match Please confirm these before moving forward [X] I have searched for my issue and not found a work-in-progress/duplicate/resolved issue. [X] I have not been informed if the issue is resolved in a preview version of the winget client. Category of the issue Installer hash mismatch. Brief description of your issue Tries to install version 16.0.16731.20398, but the installer is version 16.0.17126.20132. winget hash .\setup.exe InstallerSha256: b67903f245e766687de7233133a68dfe4efdb9892e9f73f94adf69800d74f4d5 Steps to reproduce winget install --id Microsoft.Office Installer hash does not match Actual behavior Does not install. Expected behavior Should install the app. Environment winget --info Windows Package Manager v1.6.3482 Copyright (c) Microsoft Corporation. All rights reserved. Windows: Windows.Desktop v10.0.19045.3930 System architecture: X64 Package: Microsoft.DesktopAppInstaller v1.21.3482.0 Screenshots and Logs No response Did the vendor roll the version back? We might need to remove this version. Hello, I'm a bit confused by your issue here. I'm not getting the hash mismatch for me. @FLX90 - This issue has already been resolved by #133497. Could you please re-validate it?
19:45:12 D:\Repository 31ms pwsh> winget search --id Microsoft.Office
Name                              Id                             Version          Source
-----------------------------------------------------------------------------------------
Microsoft 365 Apps for enterprise Microsoft.Office               16.0.17126.20132 winget
Office Deployment Tool            Microsoft.OfficeDeploymentTool 16.0.16731.20398 winget

19:45:43 D:\Repository 3.865s pwsh> winget show --id Microsoft.Office
Found Microsoft 365 Apps for enterprise [Microsoft.Office]
Version: 16.0.17126.20132
Publisher: Microsoft Corporation
Publisher URL: https://www.microsoft.com
Publisher Support URL: https://support.office.com
Author: Microsoft Corporation
Moniker: office
Description: Microsoft 365 Apps for enterprise is the most productive and most secure Office experience for enterprises, allowing your teams to work together seamlessly from anywhere, anytime.
Homepage: https://www.office.com
License: Commercial
License URL: https://docs.microsoft.com/legal/termsofuse
Privacy URL: https://privacy.microsoft.com
Copyright: Copyright (C) Microsoft Corporation
Copyright URL: https://docs.microsoft.com/legal/termsofuse
Tags: access excel m365 microsoft-office microsoft365 o365 office365 onedrive onedrive-for-business onenote outlook powerpoint proplus publisher skype-for-business word
Installer:
  Installer Type: exe
  Installer URL: https://officecdn.microsoft.com/pr/wsus/setup.exe
  Installer SHA256: b67903f245e766687de7233133a68dfe4efdb9892e9f73f94adf69800d74f4d5

Now it works. (Earlier I also had the hash mismatch, even after a "winget source update".)
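The `winget hash` step above is simply a streamed SHA-256 over the installer, compared against the `InstallerSha256` pinned in the manifest; an install aborts when they differ. A minimal sketch of that check in Python (the function names here are mine, not winget's):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in chunks, as `winget hash` does,
    so arbitrarily large installers never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hashes_match(actual, expected):
    """Case-insensitive comparison of hex digests, since manifests and
    tools are not consistent about upper- vs lower-case hex."""
    return actual.lower() == expected.lower()

# The hash pinned in the manifest for this version of the installer.
manifest_hash = "b67903f245e766687de7233133a68dfe4efdb9892e9f73f94adf69800d74f4d5"
```

A mismatch like the one reported usually means the vendor re-published the installer at the same URL after the manifest was authored, which is why updating the source (and hence the manifest) resolved it.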
GITHUB_ARCHIVE
[liberationtech] AdLeaks - a whistleblowing platform
Fabio Pietrosanti (naif) lists at infosecurity.ch
Mon Jun 24 02:04:29 PDT 2013

On 6/23/13 2:53 PM, Jens Christian Hillerup wrote:
> Quickly noting that I'm not affiliated with AdLeaks, just passing on
> the information.
> On Sun, Jun 23, 2013 at 1:56 PM, Andrea St <andst7 at gmail.com
> <mailto:andst7 at gmail.com>> wrote:
> it sounds different from globaleaks project. Am i right?
> Yes. GlobaLeaks seeks to establish an open-source version of the
> submission system of Wikileaks such that any and everyone can make
> their own leaks site. The core development team of GlobaLeaks is also
> on this list, so I'll let them describe it further.

GlobaLeaks' mission is to be a framework supporting different digital whistleblowing workflows and security threat models. The AdLeaks concept is very cool (http://arxiv.org/abs/1301.6263), even if it appears to me very difficult to deploy and use in a real-world scenario. See 6.1 (submission duration): it would take the whistleblower 21 days to upload a single 2MB file. Passive traffic analysis with correlation of timing/size/destination is *extremely difficult and unlikely* to be protected against without the "awareness and actions of the whistleblower" (like using an open wifi, an internet café, using Tor from another person's communication line, etc.). For a whistleblowing project we're working on, we are going to develop a widget to support covert-traffic generation: this will work by inclusion into the websites of all the partners of this whistleblowing initiative. This "does not guarantee protection to the whistleblower" doing a submission. Our widget for covert traffic is specifically designed only to provide some "additional aid" in some specific cases we've discussed (and that should be better documented in the TM).
It helps Whistleblowers who access a submission site from their corporate/governmental networks, through proxy servers that keep detailed access logs; contexts where Whistleblowers are prevented from completing a submission (because they are behind a proxy) but can still reach the site. In such a context the WB will leave traces that may be interpreted as "he intended to do a submission, but then he didn't do it". If in the Enterprise/Government organization's proxy logs there are traces of thousands of users connecting to the submission interface (due to the Widget being embedded in popular third-party websites), there will not be a single, incriminating "log entry" generated by the unaware/unconscious whistleblower, but thousands of them, making the analysis slightly more difficult. Supporting covert-traffic generation is something that "helps", but it doesn't fix the real problem, which I think *requires* Whistleblower awareness. Anyhow, I'm excited to meet the AdLeaks team at OHM2013 and brainstorm on it! :)

Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
http://logioshermes.org - http://globaleaks.org - http://tor2web.org
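No code for the covert-traffic widget appears in this thread; the following is only my own sketch of the core idea described above: embedded partner sites fire background requests toward the submission interface at randomized intervals, so a real visitor's log entry disappears among thousands of decoys. All names and parameters are assumptions.

```python
import random

def decoy_schedule(n_requests, mean_gap_s=30.0, seed=None):
    """Build randomized offsets (in seconds) at which a widget could fire
    decoy requests toward the submission interface.

    Exponential inter-arrival gaps mimic uncoordinated page loads rather
    than a fixed-period beacon, which would itself stand out in proxy logs.
    """
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_gap_s)
        times.append(t)
    return times
```

As the post stresses, this only adds noise to log analysis; it does not by itself protect a whistleblower who lacks operational awareness.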
OPCFW_CODE
Let me address the things mentioned in the article: No data is ever sent or received to or from our servers in plain text. Due to a bug in our third-party network library the certificates were not being verified, so a self-signed certificate could be used to decrypt the data. This issue has been addressed in an update awaiting review at Apple. Users' passwords are hashed before we store them in our databases (PBKDF2, salt, multiple iterations). Our users' address books are not stored on our servers and are only used temporarily to help us find your friends. It was a mistake not to hash the contents of the address book before sending it to our servers, and we are currently changing the client application so it hashes the address book contents before sending. Sensitive user data was exposed in certain endpoints (although only accessible to authenticated users). We have already addressed this issue in a server deployment and the hotfix is live now. We are currently wading through inboxes looking for Kyle's outreach. It looks like it may not have reached the core server developers. Please contact me personally at firstname.lastname@example.org if you have questions. Finally, I want to thank Kyle Richter for working out our security holes, small and large. We're currently reviewing our endpoints and codebase to further harden security and ensure the privacy of our users.

From the home page, "Play against friends in real time": this is false advertising at best. Also, is it written anywhere that people can play against bots? "We're sorry" would have been a better start.

I think most users with half a brain can figure out that not all matches are real time. If you challenge a friend, it clearly tells you that you can play the match without them and they can play against "you" when they get around to it. A genuinely acknowledging response was given; would those two words really make that much of a difference?

In QuizUp you are playing a human in real time in almost every game.
On the off chance we cannot find an opponent (which is becoming very rare due to our popularity) you may be pitted against a bot as a fallback strategy. Matchmaking is a hard technical problem, and we have chosen to maximize gameplay experience and consistency. I'm happy to share that the ratio of ghost games to real ones is getting very small! Hopefully we will be able to phase them out completely in the future.

There is no cost for a faceless company to be 'sorry', and it only promotes further unethical actions by other companies. I would rather see them pay a fine for the privacy breach. Moreover, this all comes down to the apps requiring ALL permissions to run; why is that acceptable? Why is QuizUp allowed to see a user's location in the first place? To me, it feels like making a stalker's life easier than ever. Make an app displaying cats, set it to require full permissions, put it on the App Store.

You should NOT be sending such sensitive information on other users, encrypted or not. Unless of course you want to continue this trend of violating your users' privacy.

The CA can also be provided in a .mobileprofile, installable through email. It also validates as a legitimate certificate, unless the app is looking for a particular certificate, which I think is rare.

So just for the record, are these all of the actual issues?
- no SSL verification means it's trivial to MITM
- exposure of other players' emails/bio/birthday/location/EXIF data in pics
- address book data is sent unhashed to the server
- signup emails expose the cleartext password (is this right?)
But the way I understand it, there's no reason or way to protect the client from the users themselves; custom CA install, decompilation, etc. are all ways for the user to get to their own data, or their own communication with the server. So I'm a bit at a loss why the TC article is hammering on the "… and the local file which contained user information did not require any decryption to read."
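The root of the MITM item in the list above is a client that skips certificate verification. With Python's standard `ssl` module the difference is just two settings; this sketch contrasts a correct client context with the broken configuration (it does not reproduce QuizUp's actual third-party library):

```python
import ssl

# Correct client configuration: verify the server's certificate chain
# against trusted CAs and check that the hostname matches.
strict = ssl.create_default_context()

# The failure mode described in the article: verification switched off,
# so any self-signed certificate presented by an intercepting proxy is
# silently accepted and the "encrypted" traffic can be read.
broken = ssl.create_default_context()
broken.check_hostname = False          # must be disabled before CERT_NONE
broken.verify_mode = ssl.CERT_NONE
```

A pinned or verified context makes the `.mobileprofile` custom-CA trick mentioned below useless against the app, which is why the fix belongs in the client rather than the server.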
The OP also mentions the FB tokens being exposed and such; I'm assuming these are only sent over SSL, and other people won't have access to them (with the caveat of the SSL fix), right?

- We haven't stripped EXIF data from uploaded pictures, although this is on the roadmap. Sensitive fields from user profiles have been stripped from all endpoints. This was done before the news hit TechCrunch.
- We were never saving contact lists, just using them to cross-reference our user database. In the next update we will compare hashes, not plain-text emails.
- No passwords are ever stored in plain text, but they are transmitted over SSL during signup and login. We are considering ways to further obfuscate this, but strengthening SSL goes a long way. Please contact me at email@example.com if you have comments or questions about our password policy.

You are right about the Facebook access tokens. The tokens are sent over SSL and we are not breaking any usage guidelines from Facebook. Access tokens can of course be invalidated by the user, or by Facebook. We are open to further enhancing the security of our OAuth flow, but currently it has not been exposed to any security weaknesses.

Finally, now the TC article makes sense. Someone told me the guy that found the exploit works for one of your competitors. ;)

Recording single-player games and then sending them to other users to serve as fake real-time multiplayer games seems like a very clever move and is probably the reason this game is doing so well. Not that I had heard of it before this post, though. It's a good hack that capitalizes on the way a quiz game works, and it has no real difference from true real-time multiplayer except for the likely lack of real-time messaging. The same could be done for any game in which people compete yet do not directly influence each other. The benefits are very clear: reduced matchmaking times, eliminated latency issues, eliminated signal-loss issues.
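Two of the fixes described above (PBKDF2 with a per-user salt, and hashing address-book entries before upload so the server can intersect hashes without ever seeing raw emails) are standard-library one-liners in most languages. A hedged Python sketch; the iteration count and salt size are my choices, not QuizUp's:

```python
import hashlib, hmac, os

def hash_contact(email):
    """One-way hash of an address-book entry before upload: the server
    can still match hashes against its own user table, but plain emails
    never leave the device. Normalization makes matching reliable."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

def hash_password(password, salt=None, iterations=200_000):
    """PBKDF2-HMAC-SHA256 with a random per-user salt, as the post
    describes ('pbkdf2, salt, multiple iterations')."""
    salt = salt if salt is not None else os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, dk

def verify_password(password, salt, expected, iterations=200_000):
    """Constant-time comparison to avoid leaking match length via timing."""
    _, dk = hash_password(password, salt, iterations)
    return hmac.compare_digest(dk, expected)
```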
All of these are major hurdles to multiplayer cellphone gaming, so I don't doubt that this game would be pretty successful because of it. Sending users' data to other users without permission like that feels like it should definitely be a punishable offense, but then the legal system doesn't work on logic, so who knows.

Tetris Friends does this for their multiplayer games. When you "play against people", what you're really doing is playing against their replays. It's quite clever, and it had me fooled for a while, back when I was still in college. SongPop does the same thing.

What is perhaps most shocking is that QuizUp is backed by several venture capital firms, including some very large and well-known ones. The question I have is: did they not do their due diligence when vetting this software, or did they not care? I am not sure which one is more alarming to me, and it doesn't really matter either way. Is this a sign of a bubble, when a company can raise millions of dollars with so little care put into its technology or development?

This, sadly, should surprise no-one. So yeah, "fuck it, ship it" seems to be more or less the standard. However, if you are VC funded with 10 engineers on the team, this is inexcusable.

The only kind of salt... If you are using scrypt with a reasonable difficulty and a per-user salt, there is no reason to put entropy restrictions, weak-password restrictions, etc. on your end users. It is painful to interact with sites that enforce ridiculous password requirements. You can get away with a 4-character password on Netflix. There is a reason for that. Security is much more subtle than password complexity.

No, I really am not. But as I didn't describe my reasons, you don't have the context to understand them. Frankly, if Netflix has 4-character passwords, I would expect it to be relatively easy to compromise their accounts live with a carefully put-together campaign.
If Netflix gets their username/password database dumped, I expect we'll see their policy change as the passwords are trivially cracked. Not only that, putting together a safe and sane password-retry system isn't the easiest thing ever, and doing careful fraud detection based on geolocation/IP etc. isn't the easiest thing ever either. Particularly when I don't have someone working full-time on security. Further, what you also didn't know is that the password-strength functions as written have knobs I can adjust if things are too onerous. So having harder passwords goes a long way towards 'better security' on the account side for little effort. I would advise you to be more cautious about making unsubstantiated statements based on ignorance in the future.

But I agree, there's a huge difference between just not being able to implement security and not considering it relevant. To me, this is clearly a sign of the latter.

We don't really know what it means to write good code. We can't measure it. We can barely talk about it meaningfully. It's rarely taught except via osmosis: you pair with someone more experienced and you read cargo-cult blog posts. btw, a combination of both is probably better.

If the only diploma you hold is from some backwoods high school, you've got a bit more to prove. Doesn't detract from your excellent piece or put Path in a better light, but that's the context you're referring to there.

Shoot first, address questions later. I bet this kind of decision is a consequence of the MBA/Excel mindset. Developing software properly takes time and money, and that isn't... lean (lol) and doesn't drive billion-dollar valuations.
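For reference, the scrypt-with-per-user-salt scheme argued for in this thread is available directly in Python's `hashlib`. The cost parameters below are common published defaults, not a recommendation from the thread; the memory-hardness is what makes offline cracking of a dumped database expensive regardless of password-complexity policy:

```python
import hashlib, os

def scrypt_hash(password, salt=None, n=2**14, r=8, p=1):
    """scrypt with a random per-user salt.

    n (CPU/memory cost) and r (block size) set the memory footprint
    (128 * n * r bytes, ~16 MB here), which is the knob that defeats
    GPU/ASIC cracking rigs, independent of what the user typed."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.scrypt(password.encode(), salt=salt, n=n, r=r, p=p, dklen=32)
    return salt, key
```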
OPCFW_CODE
"Ride", "journey", "adventure": the common choice of words people use to describe their life. My first choice would be "experience", a more abstract word which encompasses a wide array of emotions. But being the creatures of habit that we are, I would end up using "journey" or "ride" more often than not. If you are starting to wonder, "is this going to be just another travelogue?", you wouldn't be entirely wrong. But there will be a random mix of everything, from existential contemplation to all things trivial. If I put the thoughts on the genre to rest momentarily, the main question arises: where do I begin? And what do I begin with? The choice of not starting is out of the question, in case anyone (it's me) is wondering. Is the first step of any initiative the most difficult one? The first impression is the best, they say, and voila, there's pressure piling over several layers of apprehension. Say I start something; does it get easier? No, that's a different game of complexity and a topic of discussion for another day. Let's stick to the beginning as such. What makes it so complicated? Is it because we associate moments of significance with it? Say the big bang, the first form of life, birth (despite the frequency). Is it because it sets forth a chain of actions beyond our prophesying capabilities? (the "things will never be the same" factor) When did I start wanting to write? How did this idea originate? Or does it? Is it only about the discovery of an already existing idea hibernating in the complicated network of nerve cells? A multitude of doubts arises in my mind when posed with a simple question: how long has the idea of writing existed in me? As long as I have existed, or is it something I picked up along the way? (like an infection from a virus of the modern era). Then how long have I existed? As long as I have been breathing. Am I only a culmination of actions and reactions during this specific period? Scientists have been trying to decode human nature for a long time.
What does the study of genes convey? That we inherit a part of memory through DNA. This memory is the repertoire of information that has been passed on to us, a result of the experiences of our ancestors over several generations, which could go back as early as the origin of species. So long ago, right! Yeah, a part of me has experience from the past, and the future generation will carry a part of the memory acquired during this part of the journey as we try to evolve into a better species (are we, really?). Are we exercising our privilege to access that memory to a greater extent? I don't know. Even if we are, would we put it to good use? I will never know! Who am I then? Who started it? "I" sounds absurd if one starts to think of life in terms of experience. I will stop here, rather momentarily, but only after quoting T. S. Eliot: "We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time."
OPCFW_CODE
After Test Summary

I passed. For me, this was the easiest Salesforce certification I've taken so far. The test had multiple questions regarding Outbound Messages, Salesforce Connect, and, surprisingly, Web-to-Lead. Here's the Integration Designer study guide I prepared for myself. I've done numerous integrations and built a Bulk API Starter project, so this guide contains the things I wasn't very knowledgeable in. If you're thinking about taking this exam, make sure you've done a few real integrations first. One thing I find lacking in these materials is integration error handling and the different ways to handle it.

- Integration Patterns and Practices
- Salesforce Integration Architecture
- Webservice API Video
- Always A Blezard
- Salesforce Memo
- Salesforce Ben
- Exploring on Salesforce
- Cory Cowgill

- Salesforce Integration Capabilities – 28%
- Salesforce Integration Patterns – 17%
- Enterprise Integration Architecture Concepts – 15%
- Salesforce Integration Testing – 10%
- Integrating with Force.com: Security – 15%
- Tools – 10%
- Monitoring – 5%

Foundational Platform Integration Points
- Creating and exposing web services using Apex
- Invoking external web services from Apex
- Outbound messaging for invoking external web services when data changes
- HTTP and REST integration
- Email integration for inbound and outbound messaging
- Force.com SOAP APIs
- Syndication feeds via Force.com Sites

Replication API
- getDeleted – Retrieves the list of records that have been deleted within the given timespan for a specified object
- getUpdated – Retrieves the list of records that have been added or changed during a specified timespan for the specified object
- These API calls return a set of IDs for records that have been added, updated, or deleted, as well as a timestamp in UTC indicating when they were last updated or deleted

- Authentication, network, and session security
  - How is the user authenticated?
  - For how long is the session valid?
- Data security
  - How is access to data regulated?
- Transport-layer security
  - How is communication secured?

Outbound Messaging
- Can be used within a workflow rule or approval process
- Sent asynchronously
- Reliable – 24-hour retry for failed messages
- Supports HTTP/S
- Supports X.509 client certificates for two-way SSL authentication
- Sent from Salesforce.com IP addresses
- Outbound messages contain the Organization ID
- Two-way communication can be achieved using a callback
- The outbound message can contain the enterprise or partner URL and a session ID token

Salesforce to Salesforce
A native Salesforce feature that allows one org to share data with another org. Once enabled, it can't be disabled. After two orgs have enabled Salesforce-to-Salesforce, they have to establish a connection. A connection is established by sending an invite email and the receiver accepting it using the given link. After the connection is established, each party can publish the objects the other party can subscribe to. Most standard objects and all custom objects are available. When subscribing, one can decide whether to auto-accept records per object. If not, one has to approve the inbound records before they're available. Junction objects are auto-accepted, and child records are auto-accepted if their parent records are accepted. One also has to map the fields from the published object to their own fields.

Field Mapping Considerations
- Data type matching – Only fields of matching data types can be mapped.
- Field visibility – Lookup IDs are not available for publishing; you can expose those values through formula fields instead.
Records are shared either manually or programmatically.

Common Integration Architectures

Point to Point
Each system is connected to every other system through a direct integration. Easy to implement with only a few systems. Difficult to scale.

Hub and Spoke
Every system connects to the hub. All data transfer is done through the hub. Easy to design and implement.
- Architectures are proprietary in nature
- Single point of failure
- Inability to support large transaction volumes

Enterprise Service Bus
Distributed services architecture. Employs distributed adapters. Highly scalable.

Middleware
Computing software that functions as an intermediate layer between systems.
- ETL (Extract, Transform, Load)
- Data cleansing
- Process management

Remote Process Invocation – Request and Reply
Scenario: Salesforce invokes a process on a remote system, waits for completion of that process, and then tracks state based on the response from the system.

Remote Process Invocation – Fire and Forget
Scenario: Salesforce invokes a process in a remote system but doesn't wait for completion of the process. Instead, the remote process receives and acknowledges the request and then hands control back to Salesforce.

Batch Data Synchronization
Scenario: Data stored in the Lightning platform should be created or refreshed to reflect updates from an external system, and changes from the Lightning platform should be sent to an external system. Updates in either direction are done in a batch manner.

Remote Call-In
Scenario: Data stored in the Lightning platform is created, retrieved, updated, or deleted by a remote system.

UI Update Based on Data Changes
Scenario: The Salesforce user interface must be automatically updated as a result of changes to Salesforce data.

Pattern Selection Matrix

Streaming API
Exposes a near real-time stream of data from the platform. Notifications can be sent to
- Salesforce pages
- Application servers outside Salesforce
- External clients
- Applications that need to poll against Salesforce data frequently
- Near real-time notifications

Push Technology / Pub/Sub
- Updates performed by the Bulk API won't generate notifications, since such updates could flood a channel.
- Events may generate a notification, but it is not guaranteed.
- Unsupported queries
  - Queries without an ID in the selected fields list
  - Queries with relationships
- If a Salesforce application server is stopped, all the messages being processed but not yet sent are lost; the client must reconnect and subscribe to the topic channel to receive notifications
- Clients only receive notifications while a subscription and connection are active

Chatter REST API
REST API for integrating with Chatter.
- Pre-aggregation of data from different objects
- Data automatically localized to the user's time zone and language
- Built-in pagination
- Structured for rendering on websites and mobile devices
- Provides easy relationship traversal
- Requesting a news feed
- Updating the user's status
- Inserting a post with an @mention – the mention ID has to be specified in messageSegments

- User authentication
  - Security token
  - Two-factor authentication
- Network authentication – determines when and from where a user can log in
  - Login hours and IP ranges in a user's profile
  - Org-wide trusted IP address list
- Session security
- Data security
  - Standard APIs follow regular object-level, field-level, and record-level security
- Application-level security
  - API client whitelisting restricts all client application access until explicitly defined by the administrator
  - Client applications that are not configured as connected apps are denied access; this includes Data Loader, Salesforce1, Workbench, and the Force.com Migration Tool
  - Users whose profile or permission set has the "Use the API Client" permission may access any connected app
  - Contact Salesforce to enable API client whitelisting
- Transport-layer security
  - Two-way TLS: both the client and server present a certificate to prove their identity to the other party; a mutually trusted certificate authority signs the certificates, establishing trust between the two parties
- Outbound port restrictions
  - Port 80: HTTP only
  - Port 443: HTTPS only
  - Ports 1024–66535 inclusive: HTTP or HTTPS
- Remote site registration
  - A remote site setting is needed before Apex is allowed to call out to an external system.
- Named credentials
  - A named credential specifies the URL of a callout endpoint and its required authentication parameters in one definition.
- Supported callout types
  - Apex callouts
  - Salesforce Connect: OData 2.0
  - Salesforce Connect: OData 4.0
  - Salesforce Connect: Custom (developed with the Apex Connector Framework)

External Object Relationships

|Relationship|Allowed Child Objects|Allowed Parent Objects|Parent Field for Matching Records|
|---|---|---|---|
|Lookup|Standard, custom, external|Standard, custom|The 18-character Salesforce record ID|
|External lookup|Standard, custom, external|External|The External ID standard field|
|Indirect lookup|External|Standard, custom|You select a custom field with the External ID and Unique attributes|
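The Replication API calls above (getUpdated/getDeleted) take a start/end timestamp pair in UTC, and a single call cannot cover an arbitrarily long range. A small client-side sketch for chunking a polling range into acceptable windows; the helper function and the 30-day cap are my own assumptions for illustration, not part of the API:

```python
from datetime import datetime, timedelta, timezone

def replication_windows(start, end, max_span=timedelta(days=30)):
    """Split [start, end) into contiguous UTC windows, each no longer
    than max_span, suitable for successive getUpdated/getDeleted calls.

    Each window's end is the next window's start, so no record's
    modification timestamp can fall between two polls."""
    windows = []
    cursor = start
    while cursor < end:
        window_end = min(cursor + max_span, end)
        windows.append((cursor, window_end))
        cursor = window_end
    return windows
```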
OPCFW_CODE
RE 4 crashes on logo / after initializing. re2_framework_log.txt. I'm using the pd-upscaler branch, FSR 2.2 version of pd-upscaler, for RE 4: https://github.com/praydog/REFramework/actions/runs/4909144719

I'm assuming you're playing on the cracked version. It is not officially supported and known to crash.

Oh, I see. I just tried it out on a cracked version. Thank you for letting me know.

try this: https://github.com/hect0x7/REFramework/actions/runs/4976261746 @Karlsonnnn

It works now, thanks. Any chance to get a fix for the upscaler branch?

As I am not using DLSS, I am not sure if it will work and have no way of testing it properly. Could you please try replacing the official dinput8.dll and dinput8.pdb files with the ones found here (https://github.com/hect0x7/REFramework/actions/runs/4976601426), and test if it works? Remember to back up the original files. If it does not work, I may not know how to fix it further, and might need help from the developer.

Hi, exams just finished, I'll try it out. If it's any good, thank you!

Could you also try the build here (https://github.com/hect0x7/REFramework/actions/runs/4976942629), replacing dinput8.dll and dinput8.pdb the same way? Remember to back up the original files.
Hi, it is indeed working, and I'm using FSR 2.2 with this. Thank you very much!

It works fine until you use FSR 2.2 with the temporal upscaler, then it crashes the same way. Nonetheless, thank you for the fix, since I can now change my FOV.

Same with DLSS. I tested 2.5.1 and 3.1.1; it crashes on loading.

I notice that someone seems to have fixed the DLSS compatibility issue: https://github.com/yobson1/REFramework/releases/tag/empress

It does work. Thank you for helping me out and for the link, tysm!
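The repeated "remember to back up the original files" advice above is worth automating when you are swapping dinput8.dll builds back and forth between test runs. A small helper sketch; the paths and behaviour are my own illustration, nothing from REFramework itself:

```python
import shutil
from pathlib import Path

def replace_with_backup(target, replacement):
    """Copy `target` aside as `<name>.bak` (if it exists) before
    overwriting it with `replacement`, so the original build can
    always be restored after a failed test."""
    target, replacement = Path(target), Path(replacement)
    if target.exists():
        shutil.copy2(target, target.with_name(target.name + ".bak"))
    shutil.copy2(replacement, target)
```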
GITHUB_ARCHIVE
Getting bewildered while learning all the relevant concepts for your Java assignment? We can help you finish your programming assignment in Java with professional help. Whether it's for a client-server program or one based on a GUI, our staff can save the day with their useful services. The two design patterns are fundamentally different. However, when you study them for the first time, you will see a confusing similarity, which makes it harder to understand them. But if you continue to study, eventually you will no longer be scared of design patterns. We are firmly bound to our ethics, and we promise that we are not here to make money but to provide you programming help first. The bottom line is that only a small percentage of developers know how to design a truly object-oriented system. Do you have to produce a project on Ajax, Python or Perl? Or is your class focused on common languages like C++ or Java? We have experts who can assist you with anything. Visual Basic, Matlab, Pascal: the list goes on, and whatever particular area you need assistance with, you can count on us to assign someone able to help you. I have been developing software professionally for twenty years. I have worked for several Fortune 100 companies, including Time Warner Inc. First, let me tell you very frankly: our "do my programming homework" help service is specifically for students. Keeping their worries in mind, we always care about their pocket. Just relax; our services are pocket-friendly and we don't charge an extra penny. With us, U.S. students are able to get offers and discounts on programming assignment help costs throughout the year. In order to make students more comfortable with our programming assignment help services, we provide extra features and discounts.
Ideas can be used to guide the system to be produced in the way the framework architect required it to be architected in the first place. Thank you for subscribing to us. You will receive a confirmation email shortly at your subscribed email address. There was no way I could finish that paper in time. Thanks to MyAssignmenthelp.com, I successfully submitted a well-written paper within the deadline. Astonishingly, their prices are really low. It is kind of hard to believe, but it is true after all. These should be considered experimental. Depending on the particular e-book reader that you use, there could be issues with the rendering of long lines in program code samples. You might notice that lines which are too long to fit across your screen are incorrectly split into multiple lines, or that the part extending beyond the right margin is simply dropped. In a complex n-tier distributed system, the MVC architecture plays the critical role of organizing the presentation tier of the system. Audio chat programs or VoIP software can be helpful when the screen-sharing software does not provide two-way audio capability. Use of headsets keeps the programmers' hands free.
It is a great article, one that invokes mixed feelings. The article argues against rewriting (large-scale) software from scratch. Joel was kind enough to consider all those who write software as true programmers: people who give enough thought and do not just code up something that works. However, it is far different in the real world. That said, I am neither completely in disagreement with Joel nor am I advocating rewriting large-scale software once the code is identified as a mess. Most people who are programmers are just people who write code for a living. Nothing wrong with that, but forget the passion part of it. So the quality of the code that is generated is questionable. True programmers are different. They first build those intangible constructs in mind of how the code should be, and then they write code that reflects to the reader the intent of the task being achieved. Hence such code is readable, modular and maintainable. They are deliberate in building such code so that it is possible to implement new features (and do bug fixes, if any) in such code with ease for a fellow programmer too. That's when Joel's statement that programmers, in their hearts, are architects applies. I agree that rewriting the whole software from scratch is an ambitious attempt, and might end up foolish too. Nobody, in their sane mind, would attempt that without a backup plan. However, those functions with over 200 lines of code, which ideally would be some ten lines of code, could be rewritten. I don't agree that those functions, which clearly were written without forethought, with their tons of bug fixes, carry loads of knowledge. They mislead the reader from understanding what is actually being achieved in the function. I would argue that most of those bugs should not have been there in the first place. They come into existence because of the programmer's lack of due diligence when implementing the function.
In simple words, programming is an art and not a contest of (fast) typewriting. I have seen my share of messy unmaintainable code; code that is all but dead with its smell, or that just runs at the mercy of time! For instance, consider this piece of MFC code: int controlId = reinterpret_cast<int>(this); The above code assumes that there is a user control with an id equal to the this pointer, where the current object was some MFC form/window. That ugly code was there in production, and was running purely on luck; maybe that code path was never used, or God knows! I have seen programmers use std::string* for creating an array of strings, with the associated memory never properly released: a deliberate memory leak for free. They couldn't accept that it could be replaced with std::vector. I have also seen mightier: std::list***; perhaps the programmer was programming for Einstein, trying to think in more than three dimensions. And they never bothered to think whether the memory management was in place for that. When I take this up for a discussion with the programmer who wrote that crap of code, I get a soft slap in the face: why bother, it works, right? Worse to accept is the possessiveness that they have for the code; they won't allow the code to change (to be improved). Although threads in .NET are managed, programmers don't think even once before creating threads just for firing events and such, which they should have achieved using the ThreadPool. And they create them without holding a reference, which is bad practice. They don't grasp the line, "a Thread is an expensive resource". (See also To Hold or Not to Hold – A story on Thread references) To find out and to live with the architectural/design problems in code is another beast. In most environments, the way it gets sorted out goes political. Unfortunately, programmers invent new ways to make the code even worse in order to make up for the code leaks across software layers.
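To make the std::vector point concrete, here is a minimal sketch (the names are illustrative, not taken from any code I reviewed) of what replacing a hand-managed std::string* array with a container that owns its elements looks like:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Instead of a manually managed array of std::string* (which, in the code
// described above, was never freed), a std::vector<std::string> owns its
// elements and releases them automatically when it goes out of scope.
std::vector<std::string> make_names() {
    std::vector<std::string> names;
    names.push_back("alice");
    names.push_back("bob");
    names.push_back("carol");
    return names;  // no delete[] needed, no leak possible
}
```

The leak simply has nowhere to hide: ownership is unambiguous, and the destructor does the cleanup whether the function exits normally or via an exception.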
Besides, how many times do programmers, because of deadlines or their reluctance, fail to build an object-oriented design, and end up writing humongous functions with a whole bunch of discrete parameters? How many times does a programmer spend long hours just trying to get a hang of what the code does and why it was authored in such a way? Don't the comments in the code read what the code does, rather than why? And what about code with misleading comments? Bad messy code comes in several such forms and more, and we see them around all the time. It has a lot of impact on the day-to-day life of a programmer, of which the foremost, I would say, is maintenance. Maintaining bad code without the hope of ever improving it makes every day of a programmer painful and worthless. Unfortunately, finding and fixing tough bugs in such code is accounted as knowledge. It is nothing but the legacy of the author who wrote that bad code. This knowledge is not helpful when it comes to building/modeling software, even if it has to be rewritten. So let me call it refactoring instead of rewriting so that I don't sound against Joel's wonderful post. I have refactored whole modules because they were truly messy and unreadable, completely misleading the reader from understanding what they were actually doing; because they were not integrating well with other modules in the system and were causing havoc all around. When you fixed one bug in that code, it spawned one or more new ones. And I have refactored more than once, in different projects, in full production code. I know I sound blind and ambitious. Still, I would argue that it is possible to refactor without side effects and replace those worthless pieces of code with something better. I say something better, not to be safe but because opinions differ. But that something better, I hope, will save the life of a true programmer who is wasting it shoveling such code with no hope of cleaning it up some day.
You could even start with naming functions and variables appropriately, and making code modular within a file or class. You could build test suites of expected behavior and start changing code inside out. Form a project-wide group to detect and prevent code and functionality duplication, and build SDKs for project-wide use. Although, probably, in Joel's terms, one at a time. After all, it is all code that a programmer shovels every day. Is it unfair of a real programmer to expect that the code be clean, readable and maintainable? And, best of all, be written in a way that a fellow programmer, or one new to the team, could understand even after the original author of the code is long gone. All it takes to rewrite (refactor) something is a vision of how much better you can make the code, a bit of courage to tackle dirty politics, and speed of execution. If you have at least those three things, it is possible to replace those 200-line or two-page functions with a cleaner, readable, modular, maintainable version. That, in an industrial environment, which almost always is political, is a noble cause. I think it is not only the duty but the charm of a real programmer! With regard to this post, one of my favorite quotes by John D. Cook comes to my mind: "The romantic image of an über-programmer is someone who fires up Emacs, types like a machine gun, and delivers a flawless final product from scratch. A more accurate image would be someone who stares quietly into space for a few minutes and then says 'Hmm. I think I've seen something like this before.'"
I couldn't think of a title for this, other than: does it work? I am currently a Synology user who has maxed out on the resources of my NAS. It has enough space, but it can't handle the plugins I want to run. As such, I got a hardware list together, bought the parts, and am now thinking of options. XPEnology will work for me, but I am not a fan of the fact that it is a bit of a "hack". I would rather use something that is native. So I am looking at Rockstor and need to know if this really will work for me. What I need is the NAS to support the following: Plex (Rock-on exists), a torrent downloader (Rock-on exists), backup to S3/Glacier, and backup to USB. I get that RAID 10 is stable, but I can't afford the disk penalty here. If Rockstor/btrfs still has issues recovering from a drive failure, then this would be a deal breaker. You "can't afford" the penalty right now, or are you thinking more long-term? Because one cool feature of btrfs is that it's easy to switch RAID configs on the fly. So in my case, I've started off with RAID10, just to be safe, and will probably convert it to RAID5 later, as I need the space, and as it becomes more stable. On S3/Glacier, I've been running rclone on my Rockstor box, with a little script. Works great. Hi @Hobo_Joe, welcome to the Rockstor community! As you've gathered, a Rock-on for sickrage does not exist, but you may be able to add one by following this README. I see that some are already succeeding at this, which is great to see. As to the stability of raid5/6, the consensus is that it's not rock solid yet. The link @bdarcus posted describes the progress and current status pretty well, but in summary, things have improved quite a bit as of the 4.3 kernel. Personally, though, I do use raid5 and raid6 without any basic issues. The big open question for me is whether it's stable enough to recover from drive failure reliably.
While this is mostly in the hands of the btrfs developers, a bigger open problem within our (the Rockstor project's) reach is to provide a nice alert system that notifies the user about errors, next steps to take, etc. We have basic S.M.A.R.T. support, and savvy users can set this up with some manual work, but we do plan to make this better soon. Regarding USB backup, I need it myself and there's enough demand from the community as a whole. Here's the issue for it. Please stay tuned. Rockstor is not a completely independent package sitting on CentOS; it would have been a lot easier if that were the case. Instead, you can think of it as being weakly dependent. For example, it needs a few special packages, including the kernel from elrepo and our own baked btrfs-progs package, among others. So if you run a different kernel, you get a warning on the UI. The other kind of weak dependence is configuration files such as samba, sshd, etc. that Rockstor manages. In these cases, Rockstor tries to confine its config to clearly marked sections whenever possible. But it could interfere in some cases, depending on what you are trying to do around Rockstor. The key takeaway, though, is that it is 100% open source and Rockstor does not put any arbitrary restrictions on what you can or cannot do with your system. Same here. But I decided to get my own hardware and then had to choose whether to use XPEnology or Rockstor. I've got to admit that DSM 6 will be tough to beat: btrfs, cloud backup, out-of-the-box Let's Encrypt support, and more. Nonetheless, I'll give Rockstor a try now, and I'm going to test bdarcus' rclone backup. Hopefully more and more DSM 6 features will make their way to Rockstor as time progresses.
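For the record, the on-the-fly RAID conversion mentioned above is done with a btrfs balance. A minimal command sketch, assuming the pool is mounted at /mnt2/main_pool (the mount point is a placeholder; substitute your own):

```sh
# Convert both data and metadata of a mounted pool from raid10 to raid5.
# This runs online, while the filesystem stays usable.
btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt2/main_pool

# Check progress of the running balance at any time.
btrfs balance status /mnt2/main_pool
```

The `-dconvert` and `-mconvert` filters change the profile of data and metadata block groups respectively; the balance rewrites existing data into the new layout, so expect it to take a while on a full pool.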
import KratosMultiphysics
from importlib import import_module

def CreateSolverByParameters(model: KratosMultiphysics.Model, solver_settings: KratosMultiphysics.Parameters, parallelism: str):
    filter_type = solver_settings["filter_type"].GetString()

    # Solvers for OpenMP parallelism
    if parallelism == "OpenMP":
        if filter_type == "general_scalar":
            solver_module_name = "helmholtz_scalar_solver"
        elif filter_type == "bulk_surface_shape" or filter_type == "general_vector":
            solver_module_name = "helmholtz_vector_solver"
        else:
            err_msg = 'The requested solver type "' + filter_type + '" is not in the python solvers wrapper\n'
            err_msg += 'Available options are: "general_scalar", "general_vector", "bulk_surface_shape"'
            raise Exception(err_msg)
    # Solvers for MPI parallelism
    elif parallelism == "MPI":
        if filter_type == "bulk_surface_shape" or filter_type == "general_vector":
            solver_module_name = "trilinos_general_vector_filter_solver"
        elif filter_type == "general_scalar":
            solver_module_name = "trilinos_general_scalar_filter_solver"
        else:
            err_msg = 'The requested solver type "' + filter_type + '" is not in the python solvers wrapper\n'
            err_msg += 'Available options are: "general_scalar", "general_vector", "bulk_surface_shape"'
            raise Exception(err_msg)
    else:
        err_msg = 'The requested parallel type "' + parallelism + '" is not available!\n'
        err_msg += 'Available options are: "OpenMP", "MPI"'
        raise Exception(err_msg)

    module_full = 'KratosMultiphysics.OptimizationApplication.filtering.' + solver_module_name
    solver = import_module(module_full).CreateSolver(model, solver_settings)
    return solver

def CreateSolver(model: KratosMultiphysics.Model, custom_settings: KratosMultiphysics.Parameters):
    if not isinstance(model, KratosMultiphysics.Model):
        raise Exception("input is expected to be provided as a Kratos Model object")
    if not isinstance(custom_settings, KratosMultiphysics.Parameters):
        raise Exception("input is expected to be provided as a Kratos Parameters object")

    solver_settings = custom_settings["solver_settings"]
    parallelism = custom_settings["problem_data"]["parallel_type"].GetString()

    return CreateSolverByParameters(model, solver_settings, parallelism)
[dv/chip] Fix xcelium stub_cpu mode issue The stub_cpu mode uses CPU_HIER.clk_i as the cpu_clk to drive tlul transactions. But in tb.sv, the current code forces this wire to 0 if we are in stub_cpu mode. I think the intention is to not force this clock in stub_cpu mode. This PR fixes issue #15459 Signed-off-by: Cindy Chen<EMAIL_ADDRESS> i'm really confused by this. Is this an issue because in xcelium when we force that clock to 0, it back traces to the CPU_HIER.clk_i and makes that one 0 also? This technically means chip_csr_rw should have failed in xcelium also right? i'm really confused by this. Is this an issue because in xcelium when we force that clock to 0, it back traces to the CPU_HIER.clk_i and makes that one 0 also? This technically means chip_csr_rw should have failed in xcelium also right? Yes! I ran chip_csr_rw, and it also failed. I think xcelium did trace back and force CPU_HIER.clk_i to 0. But I am not really sure which SRAM stub mode will use. Will this PR create any other issues? so i'm not sure we have this case... but based on what you've observed, if we are NOT in stub mode and also did NOT enable sim sram, this would also force CPU_HIER.clk_i to 0 right? And that would probably cause that particular case to break (even though we don't have it). Do you think it makes sense to change the u_sim_ram clock input to a signal? That way the logic can do something like...sel_sim_ram_clk or something.. and we could mux between CPU_HIER.clk_i and 0, that way we don't have to actually force anything. Sure! That works and sounds better than this solution. I will update that! Thanks Tim. On Wed, Oct 19, 2022, 11:26 PM tjaychen @.***> wrote: so i'm not sure we have this case... but based on what you've observed, if we are NOT in stub mode and also did NOT enable sim sram, this would also force CPU_HIER.clk_i to 0 right? And that would probably cause that particular case to break (even though we don't have it).
Do you think it makes sense to change the u_sim_ram clock input to a signal? That way the logic can do something like...sel_sim_ram_clk or something.. and we could mux between CPU_HIER.clk_i and 0, that way we don't have to actually force anything. just to double check, did this actually help? And maybe we should ask @timothytrippel about adding chip_csr_rw into xcelium regressions :) just to double check, did this actually help? And maybe we should ask @timothytrippel about adding chip_csr_rw into xcelium regressions :) Thanks Tim. I checked with Tim; he only picked a few short tests to run CI using xcelium here: https://github.com/lowRISC/opentitan/blob/master/hw/top_earlgrey/dv/chip_sim_cfg.hjson#L1444 That is why we did not find the xcelium issue in the CSR_RW test until now. sounds good. Maybe it makes sense to put at least one stub test in xcelium. We probably are mostly overly focused on C test cases... We can certainly add the chip_csr_rw test to the xcelium regression too. Thanks @cindychip for fixing this!
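The mux suggested in the discussion could look roughly like the following SystemVerilog sketch. The signal names (sel_sim_ram_clk, sim_ram_clk) come from the comment above, and CPU_HIER is the hierarchy macro referenced in tb.sv; treat this as an illustration of the idea rather than the actual patch:

```systemverilog
// Testbench-controlled select: 1 when the sim SRAM should be clocked,
// 0 otherwise. Driven from the test sequence instead of via a force.
logic sel_sim_ram_clk;

// Mux between the real CPU clock and a constant 0, so nothing is forced
// and CPU_HIER.clk_i is never back-driven by the testbench.
wire sim_ram_clk = sel_sim_ram_clk ? `CPU_HIER.clk_i : 1'b0;
```

The benefit is exactly what the reviewer notes: since no hierarchical force is involved, a simulator cannot trace the assignment back and zero out CPU_HIER.clk_i itself.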
Image annotation is a critical step in training AI models to understand and interpret visual data accurately. It involves labeling specific objects or regions within an image to provide contextual information for machine learning algorithms. Various types of image annotations and techniques are used to annotate different aspects of visual data. In this article, we will provide a comprehensive overview of the different types of image annotations and the techniques used to annotate them.

Types of image annotations

1. Image classification

Image classification involves labeling images into predefined categories or classes. It aims to identify and categorize images based on their content. For example, classifying images as "cats" or "dogs" based on their visual features.

2. Object detection & tracking

Object detection and tracking annotations focus on identifying and localizing specific objects within an image. This technique involves drawing bounding boxes around objects of interest to provide precise spatial information. Object tracking annotations extend this concept by tracking objects across multiple frames or images.

3. Semantic segmentation

Semantic segmentation annotations involve labeling each pixel of an image with a specific class. This technique enables the AI model to understand the boundaries and relationships between different objects within an image. It provides a more detailed understanding of the scene, enabling accurate object recognition and scene understanding.

Image annotation techniques

- Bounding boxes (2D & 3D)

Bounding box annotations are one of the most common techniques used in image annotation. They involve drawing rectangular boxes around objects of interest. In 2D annotations, the bounding boxes provide information about the object's position and size within a single image. In 3D annotations, bounding boxes can represent the object's position, size, and orientation in three-dimensional space.

- Polygon annotations

Polygon annotations are used to annotate objects with irregular shapes.
Instead of using rectangular bounding boxes, polygons define the exact contours of objects. This technique is commonly employed for objects such as vehicles, buildings, or natural landscapes.

- Polylines

Polylines are annotations used to annotate linear objects, such as roads, rivers, or boundaries. Unlike polygons, polylines do not enclose a specific area but rather define the shape and direction of lines.

- Semantic segmentation

Semantic segmentation annotations assign a class label to each pixel within an image. This technique enables pixel-level understanding and accurate delineation of object boundaries. It is widely used in applications like autonomous driving, medical imaging, and scene understanding.

- Keypoint annotations

Keypoint annotations involve identifying and labeling specific points of interest within an image. These points represent critical landmarks or features, such as joints in human pose estimation or facial keypoints for emotion recognition.

- LiDAR & RADAR

LiDAR and RADAR annotations are specific to sensor data annotations in autonomous driving. LiDAR annotations involve labeling point clouds to detect objects and estimate their 3D position, while RADAR annotations are used to annotate radar data for object detection and tracking.

- Multisensor annotations

Multisensor annotations involve combining annotations from multiple sources, such as images, LiDAR, RADAR, or other sensors. By fusing data from different sensors, a more comprehensive and accurate understanding of the environment can be achieved.

Related reading: How data annotation is enhancing machine learning capabilities

Understanding the different types of image annotations and annotation techniques is crucial for training AI models effectively. Each annotation type and technique serves a specific purpose in providing contextual information to machine learning algorithms.
By utilizing the appropriate annotation techniques based on the desired outcome, AI models can be trained to accurately interpret and understand visual data in various domains, including object recognition, scene understanding, and autonomous driving. Unlock your business potential with Netscribes’ extensive range of data annotation use cases. Our solutions are designed to optimize performance, reduce costs, and fuel growth. Contact us now to learn more.
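As a concrete illustration of the bounding-box technique described above, here is a minimal sketch of an annotation record in the COCO style (the field names follow the widely used COCO convention; the values and category are made up):

```python
# A minimal COCO-style bounding-box annotation: bbox is [x, y, width, height]
# in pixels, tied to an image id and a category id. Values are illustrative.
annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 3,                    # e.g. "car" in the category list
    "bbox": [120.0, 80.0, 64.0, 48.0],   # top-left x, y, then width, height
    "area": 64.0 * 48.0,
    "iscrowd": 0,
}

def bbox_corners(bbox):
    """Convert [x, y, w, h] to (x1, y1, x2, y2) corner form."""
    x, y, w, h = bbox
    return (x, y, x + w, y + h)
```

Polygon and keypoint annotations typically live in the same record structure, just with a `segmentation` or `keypoints` list instead of (or alongside) `bbox`.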
If your Sculpfun S9 (or similar laser) doesn't have the ability to permanently save changes made to the firmware, you may need to re-flash the microcontroller inside of it. This lets the brain running the diode effectively use newer features of GRBL (the software controlling your laser inside its electronics), and helps you avoid the possibility of forgetting to click the homing button. While this guide requires no additional physical components, several downloads are needed. This guide was written for older versions of the Arduino IDE, released before 2.0. Major changes were made in that version, and as such, this guide may not work with it. Please revert to an older version to use these steps, or find a newer guide to update your laser. Thank you! To successfully flash your laser, high attention to detail is needed. Flashing your firmware incorrectly can damage your machine. Read the whole guide, start to finish, before trying to flash your machine. Plug in your laser's power and USB, connect it to your computer, and open LightBurn. In LightBurn, use the macro created in the Limit Switch Guide to apply the needed limit switch firmware changes, and afterwards use the Edit -> Machine Settings dialog. In this window, you'll want to click "Save to file". This allows you to save a backup of the laser's configuration that's known to work before we flash the firmware and reset all of those settings. Backup via Console¶ Using the Console, you can enter "$$" to get the GRBL values of the machine to supplement those from the Machine Settings window. This gives you a further verification step to make sure the laser's behavior is as intended after flashing. Copy those to a text editor like Notepad or TextEdit and save them for safekeeping. At this point, please close LightBurn. Not doing so will cause flashing errors later in this process. To flash a newer version of GRBL to your laser, you'll need to download the Arduino IDE.
As a toolkit that enables microcontroller development in a friendly bundle, it allows you to make any changes we might need, connect to the laser, and send the new version of GRBL to your device. You'll need to install the Arduino IDE through the appropriate installer for your system, but the specifics will vary from computer to computer. To actually have something to flash to the laser, you'll need GRBL. An off-the-shelf version of GRBL, however, assumes we have three axes, not two. As a result, when it goes to home the "Z" axis, it never proceeds to the next step, as it physically cannot trigger a limit switch that doesn't exist. To make this step easier, we already made the necessary modifications to the current (1.1h) GRBL version for you, disabling the third axis. Download the file linked and extract it. Setting Up Arduino IDE¶ In the Arduino IDE, we'll need to open the Sketch menu, click "Include Library", and then "Add .ZIP Library". This allows us to tell the Arduino IDE that the code you've downloaded is something you need it to use. Make sure that "Makefile", "COPYING" and "README.md" are visible in the folder you select; the folder inside of that one labeled grbl is not the correct folder for Arduino IDE to recognize it. In other words, select the grbl folder in the root of the ZIP you extracted. You'll then need to go back to the menu by going to Sketch, Include Library, and ensure that "grbl" is listed under "Contributed Libraries". To get ready to flash your board, open the "File" menu, Examples, "grbl" and "grblUpload". It will open a new window containing the sketch, mostly ready to go. You'll still need to select your board and port, to tell Arduino how and where to upload the new firmware. Port & Board Configuration¶ We're working with a board resembling the MKS DLC 2.0, running an Atmel 328p. To the Arduino IDE, this is an Arduino Uno, as that's the chip the Uno is based on. Under Tools, Board, select "Arduino Uno" near the top of the list.
You'll also need to select the correct port to send the firmware on, so open Tools, Port, and take note of the ports listed. Connect your laser via USB and check the Ports menu again to look for any new devices. Unplug and replug your laser once more to confirm the port the laser is on, and select it. Use the checkmark at the top left to verify that you've set the project up correctly, and note that the bottom of the screen, above the black terminal area, says "Done compiling." when you do so. If it does, we're ready to upload! If there's an error shown that it can't access the port in some way, make sure that you closed LightBurn. Only one device can "have control" of certain peripherals, and Arduino IDE needs full control of that device for a moment to send the new firmware. Click the Upload arrow to send the firmware, and close the Arduino IDE once it finishes. In LightBurn, open the Machine Settings menu again, and click "Load from file", selecting the "lbset" or "LightBurn Settings" file we made at the start of the guide. At this point, click "Write" to apply these settings to our machine's firmware. This transfers the configuration back to the machine, and writes it to EEPROM, or programmable memory. This means our homing behavior will be consistent through machine reboots. When this has finished, the window will show "controller settings written successfully". In the console, enter "$$" once more and verify that the settings match those of the macro made originally, and/or the backup file we made when backing up via Console. If these settings hold, you'll be able to click the "Home" button in LightBurn and have it home automatically, without any additional use of macros.
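For reference, the "$$" dump is a plain list of `$<number>=<value>` lines. The values below are only an example (every machine's settings differ, and GRBL prints bare pairs without the descriptions added here), but `$22=1` is the homing-cycle enable that this guide cares about:

```
$20=0        (soft limits, off)
$21=1        (hard limits, on)
$22=1        (homing cycle, on)
$23=0        (homing direction invert mask)
$27=1.000    (homing pull-off, mm)
```

If your post-flash `$$` output differs from your backup, re-enter the changed values one at a time (e.g. type `$22=1` in the console) rather than re-flashing.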
Rick Prelinger, back in 1982, began a collection of rather diverse films under the broad category of being "ephemeral". These were films which weren't intended to last forever, but were fit for the moment. They include films made by organisations, home captures and educational films. They usually had a purpose, so their creation and preservation weren't designed to last. The Internet Archives hold many of these films, and this set comprises metadata for around 2000 films in the Public Domain. The set is described (in the obvious source, the Developer Docs!) as follows: While the archives currently provide a page for every film, with links to its metadata, and a full-text index to support searching, there is no real formal API over the data. However, the Internet Archives do provide some XML files and a simple JSON interface for downloading the metadata. To provide more flexible ways to access the data, the dataset has been crawled and loaded into Kasabi. This includes the reviews of the films posted to the Internet Archive itself. The converted dataset exposes the data as Linked Data while retaining links to the original films, allowing developers to access the media files for the films themselves. This is some interesting data, covering a topic which I've never encountered before, and it has led me to read up a bit more about the ephemeral films, and dive in to discover films about coffee (for example). The Developer Docs begin in my favourite way, with a list of potential uses of this data:

- Development of an improved interface to the archive data, to support exploring an important historical archive
- Exploration of video and film annotation using real-world public domain data and media
- Cross-linking of the archive content with other company, location and historical archives

These uses all sound like a good match for potential apps for the Cultural Data Hackday, so I'm hoping these are interesting and usable.
How the Data Works

We can have a look at a diagram describing the relationships of the data types, and see how different pieces of data are represented. Straight away, we can see that the majority of relationships are based around the hub of the concept of "moving image". That way, each moving image can have certain attributes and properties associated with it. The set uses a list of different schemas to describe different elements of the data, including its own custom vocabulary of terms. This descriptive resource gives us a picture of what's in the set, how the data is described, and how it all relates. "Films," for example: Each of the films in the Prelinger Archives is modelled with a type of prel:MovingImage. The URIs for the films have been constructed using the following URI pattern: The identifier for the film is taken from the Internet Archives and is also included in the dataset as the value of a dct:identifier property associated with each film. For SPARQLers out there, there is a published example query found under the SPARQL API which lists titles of films along with some important information (film maker and sponsor). Sample queries are there, usually, to demonstrate one context of the data queried using SPARQL, and you can view the full query here. The set publisher, Leigh, has also described how the data was accumulated and modelled, and made his crawler open for collaboration: The data was compiled by using a custom Ruby crawler to look up the unique identifiers for each of the films in the Prelinger Archive collection, then traversing the Internet Archive site to fetch the XML metadata for the film, its associated media files, and reviews (if any). The crawler builds a local cache of the base metadata which is then converted into N-Triples for loading into Kasabi. This also allows for rebuilding of the dataset without having to unnecessarily load the Internet Archive servers.
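To give a flavour of what such a query looks like, here is a minimal SPARQL sketch listing films with their identifiers. The dct: prefix is the standard Dublin Core terms namespace; the prel: namespace URI is a placeholder, since the dataset's actual vocabulary URI isn't reproduced here:

```sparql
PREFIX dct: <http://purl.org/dc/terms/>
# "prel" is the dataset's custom vocabulary; this namespace URI is a
# placeholder for illustration only.
PREFIX prel: <http://example.org/prelinger/schema/>

SELECT ?film ?title ?id
WHERE {
  ?film a prel:MovingImage ;
        dct:title ?title ;
        dct:identifier ?id .
}
LIMIT 10
```

The published example query mentioned above additionally pulls in the film maker and sponsor; the pattern is the same, just with more optional properties on ?film.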
The code to support the crawling and conversion of this dataset has been open sourced. Developers interested in collaborating to improve the modelling or fix up data conversion errors can contribute to the project via GitHub. So, what's in it? I may suffer from a slight obsession with coffee (I'm sure the doctors aren't worried about it), so the first thing I looked for was a simple search for coffee-related data. I was excited to find out it's an actual topic in the set, so there must be items in there which feature coffee: There are 695 results matching coffee, and interestingly, they commonly feature in films advertising coffee, such as Sanka. I was interested to note that this commercial has several reviews, one of which reads: This commercial for the long time only decaf instant coffee opens with a proto-Juan Valdez picking coffee beans. Then it moves to how rich this coffee is and smoothly enthuses about the new jar shape. Fairly typical 1960s era coffee commercial, with nothing over the top. I went to the Prelinger Archives home on the Internet Archives to watch the video, too!
Brief thoughts and insights from Caleb. Mostly on work. Mostly while drinking tea.

Real life code talk between two working developers.

The runtime is a podcast devoted to asking questions about why we build software the way we do. It has a focus on web development, but will explore widely, looking at how different software is designed, and what makes it great.

A podcast for developers interested in building great software products. Every episode, Adam Wathan is joined by a guest to talk about everything from product design and user experience to unit testing and system administration.

An interview podcast discussing mental health in software development, with your host, James Brooks.

An ongoing conversation about work, internet culture, and what it means to be a mindful 20-something at the beginning of your career.

- I'm really excited about a new feature in Livewire; it'll come out in the next few days. Let's talk about it. (By Caleb Porzio)
- Let's talk about what goes through my head. (By Caleb Porzio)
- I feel like a kid again. (By Caleb Porzio)
- Niall Crosby, creator of AgGrid, joins the panel to discuss the journey from building an open source data grid used all over the world to providing support and enterprise features and running a successful business based on that same open source software. Panel: AJ O'Neal, Charles Max Wood, Dan Shappir, Steve Edwards. Guest: Niall Crosby. Sponsors: JavaScri…
- We talk about drugs, and the great and good drug called transducers. (By Daniel Coulbourne & Caleb Porzio)
- Rafael is joined by Mark Henderson and Haadcode to talk about OrbitDB, a distributed database / data mesh for use in peer to peer applications. They talk about what it is like developing in the peer to peer field, talk about developing an oplog CRDT, complain a little bit about safari and browser storage limitations, and discuss how one of the core…
- Right now, as I record this, I'm in a discord and scouring GitHub hoping that some maintainer swoops in and saves me from my troubles. I'm usually on the other side. This is valuable. (By Caleb Porzio)
- Don't pretend it's all hunky dory. (By Caleb Porzio)
- Brock is here. NFTs are a total scam. We are making an NFT. We briefly touch on the super complicated topic of transducers. (By Daniel Coulbourne & Caleb Porzio)
- ...stay focused (By Caleb Porzio)
- ...before they're gone (By Caleb Porzio)
- Daniel has no life, only programming. This leads to an episode with zero talk of knots. (By Daniel Coulbourne & Caleb Porzio)
- Bianca and Sumitra from Raygun join the panel to talk about Core Web Vitals and how tools like Raygun can help keep tabs on and monitor your performance stats as you change your web application to get you better results on Google. Panel: Aimee Knight, AJ O'Neal, Charles Max Wood, Dan Shappir, Steve Edwards. Guest: Bianca Grizhar, Sumitra Manga. Sponsors: Dev…
- The power of the partnership is not to be underestimated. (By Caleb Porzio)
- Like, I got them before, but now I GET them. You know? You know. (By Caleb Porzio)
- Rafael is joined by Rosano to discuss Zero Data Apps, a category of applications designed not to hold customer data, but to manipulate customer data that is under the customer's control. Project Cambria (This will be featured on an upcoming episode of the podcast): https://www.inkandswitch.com/cambria.html Gordon Brander's newsletter: https://subco… LINKS: Melvor Idle; Hannanie - I Forced 100 People to Pick Cabbages... Now I'm Rich; Caleb's 3 Approaches to Arrays of Models (By Daniel Coulbourne & Caleb Porzio)
- Folks, we have it pretty damn good. Amirite? (By Caleb Porzio)
- I just got back from a fishing trip in the Dacks (imagine that), and came back inspired. (By Caleb Porzio)
- This is the first Laracon in a bit that I'm not speaking. Let's talk about it. (By Caleb Porzio)
OPCFW_CODE
May 30, 2013

This is just a short update to announce that fvwm-mode has moved from my internal Subversion repository to GitHub after a request to update the headers to conform to Emacs standards for the Emacsmirror. The net result of this is twofold:

- I was reminded of a bunch of stuff I still wanted to do/fix in fvwm-mode, so it might even get done (I've already fixed some deprecation warnings; properly supporting lexical binding is next on my list).
- fvwm-mode ended up on GitHub, as that's easier for the Emacsmirror to, well, mirror compared to some file on some webserver that I sometimes update.

I didn't bother retaining the original version history as the interesting bits are in the NEWS file anyway. The fvwm-mode repository resides here.

November 2, 2011

Earlier searches led me to believe that storing sent mail in an IMAP folder was not feasible with Gnus. So I was pleasantly surprised to find that the following setting, where "superbia" is the name of my IMAP server in my Gnus config, does the job:

; Store sent mail in a server IMAP folder on a per year basis
(setq gnus-message-archive-group
      (format-time-string "nnimap+superbia:INBOX.Sent.%Y"))

Performance seems OK so far, so we'll see how it turns out in the long run.

July 5, 2011

I had to work on a bunch of translation Java property files in an ancient J2EE project that had old translations in between the new ones, resulting in a hardly readable mess that confused the hell out of Netbeans' property file editor plugin, so I decided to just move all those old lines to the bottom of the file and comment them out. For this I quickly hacked up two elisp functions:

(defun move-line-to-end-of-file ()
  "Move the current line to the end of the file."
  (interactive)
  (save-excursion
    (beginning-of-line)
    (kill-whole-line)
    (end-of-buffer)
    (yank-as-comment)))

(defun yank-as-comment ()
  "Yank the last killed selection as a comment, but leave text alone
that is already a comment according to the current mode."
  (interactive)
  (save-excursion
    (yank)
    (if (not (comment-only-p (mark) (point)))
        (comment-region (mark) (point)))))

There probably is a much more idiomatic way to do this (which I'd love to hear about), but this seems to work just fine for my use. Note however that move-line-to-end-of-file will just append to the last line if the file doesn't end with a newline (which isn't an issue in my case).

June 3, 2007

This morning I check in on #fvwm and the first thing I notice is the following:

03/06|01:03:06 < Hun> theBlackDragon: aaaaaaaaaaah!
03/06|01:03:13 < Hun> e22 has been released!

So I run off to the Emacs homepage, but nothing to be found about a new release. Hmm. So what can you do? Check usenet of course, and sure enough on gnu.emacs.devel (e-mail addresses removed for obvious reasons):

From: David Kastrup <*@*>
Subject: Re: Cygwin binaries for Emacs-22.1 released!
Newsgroups: gmane.emacs.devel
To: Angelo Graziosi <*@*>
Cc: email@example.com
Date: Sat, 02 Jun 2007 21:18:05 +0200

Angelo Graziosi <*@*> writes:

> Cygwin binaries for 22.1 release here:
>
> http://www.webalice.it/angelo.graziosi/Emacs.html

I was going to mouth off about premature announcements and being careless about release numbers, but checking on the ftp server have found that the tarball for Emacs 22.1 has been uploaded. Did I miss the announcement on this list?

-- David Kastrup

And sure enough, there it was, on the GNU FTP server, all shiny and new: an Emacs 22.1 tarball. The changes from 21.x are a little bit too numerous to sum up here; I'd say just upgrade or read the ChangeLog.

January 21, 2007

Everybody knows you can evaluate elisp expressions in the scratch buffer, but a lot fewer people seem to be aware of ielm, the interactive Emacs Lisp mode. This mode opens a sort of Emacs REPL that allows you to work with Emacs Lisp much in the same way you'd do at a Common Lisp REPL.
Another little known fact is that you can evaluate elisp expressions at the eshell prompt; just try typing (+ 12 9) and hitting return. Spiffy eh? But you'll quickly run into eshell's elisp limitations: it doesn't correctly indent code, so it's not really useful for 'real' elisp hacking; ielm is much better suited for that purpose as it does correctly indent elisp code.

December 29, 2006

I don't remember where I found this snippet (maybe comp.emacs) but it's mighty useful so I thought I'd share it with you:

; Sets +x on scripts starting with a shebang
(if (< emacs-major-version 22)
    (add-hook 'after-save-hook
              '(lambda ()
                 (progn
                   (and (save-excursion
                          (save-restriction
                            (widen)
                            (goto-char (point-min))
                            (save-match-data
                              (looking-at "^#!"))))
                        (shell-command (concat "chmod u+x " buffer-file-name))
                        (message (concat "Saved as script: " buffer-file-name))))))
  (add-hook 'after-save-hook
            'executable-make-buffer-file-executable-if-script-p))

Emacs versions below 22 need the former, longer method, while Emacs 22 can use the one-liner in the last line.

September 3, 2006

There are a couple of resources describing how to get Emacs 23 (aka "the unicode branch") to display antialiased fonts (see here for starters). So, building on my previous post on installing Emacs 23 from CVS, you need to use this configure command:

./configure --prefix=/home/theBlackDragon/local --program-suffix=.emacs-23 --with-gtk --enable-font-backend --with-xft --with-freetype

You'll need to start it like this, for example:

emacs-23 --enable-font-backend --font "Bitstream Vera Sans Mono-12"

You can use an Xresource for the font if you like, but the --enable-font-backend switch is mandatory.

May 12, 2006

While reading gmane.emacs.help I came across this post where somebody asked how he could make a portable app out of Emacs' Windows version.
So since I'm still at my internship (where they use Windows) and just recently obtained an iAudio X5L portable media player, which behaves like a USB2 harddisk to the OS, I wondered if I could manage to make Emacs portable. Short version: I could. The "hardest" part was in fact not even Emacs related, but my sheer lack of knowledge about Windows batch script. I started by creating a "SOFT" folder on my external drive (named E: by Windows on this box) and extracted the latest Emacs build from the ntemacs project in this directory, thus creating "E:\SOFT\emacs" with all the usual subfolders. Next I created a site-lisp/site-start.el file and added the following as suggested on usenet:

(let ((home (concat (substring (car command-line-args) 0 2) "/SOFT")))
  (setq user-init-file (concat home "/config/.emacs"))
  (setq auto-save-list-file-prefix
        (concat home "/config/.emacs.d/auto-save-list/.saves-")))

You'll need to adapt this to your particular environment of course. This makes Emacs load my user-init-file (aka ".emacs") from "E:/SOFT/config/.emacs" and moves the auto-save-list to the external device as well. The latter should be superfluous, as I change the location of HOME as described later, but it accounts for the case where you don't want to change your HOME. This is in fact the entire Emacs side of the story, though you will probably have some work making your elisp packages work with relative paths if you didn't take that into account before. Now you need to redefine HOME in Windows (well, you don't need to, but I find it useful). You can either do that globally with the System Properties, or with autoexec.bat in DOS-based Windows versions, but I preferred to make it dynamic by writing a wrapper batfile that sets this variable and then launches Emacs. I named it runemacs.bat (how appropriate ;)) and put it in Emacs' bin directory, the content looks like this: That's it! You can now run Emacs entirely from your external drive!
May 11, 2006

A question that pops up pretty often is how you can add line numbers in the margin of your files the way most other editors do. There are actually two ways of doing this:

- adding the numbers in the margin
- adding the numbers to your files, so that they get saved with your file, which might be useful in certain cases (COBOL source?)

Personally I (occasionally) use setnu.el, which works extremely well for my purposes.

May 4, 2006

I've been using Emacs for some time now but have only recently started to heavily customize it. With this I've also started tracking (or more actively tracking) various Emacs related resources, picking up useful bits of information along the way. I'll try to share the most useful ones here with you on a (hopefully somewhat) regular basis. I'll kick off with a very short one; the next line enables "focus follows mouse" on Emacs windows (not frames; frames are handled by the window manager):

(setq mouse-autoselect-window t)

So what is "focus follows mouse" anyway? Basically it means that the window your mouse pointer is over receives focus (input), so you don't need to click on it to give it focus (as you have to in, say, Windows). Some people find this confusing; most people can't live without it once they get used to it. For more information on "focus follows mouse" see here.
OPCFW_CODE
DjiNN is a unified Deep Neural Network (DNN) web service that allows DNN-based applications to make requests to DjiNN. Visit our Downloads page to get the latest version of DjiNN and the Tonic applications. DjiNN uses Caffe for the DNN forward-pass computation. Additionally, Tonic Suite depends on the following packages: If you have built Tonic Suite, which means you should already have Caffe installed, skip this step. Caffe is under active development, and since some of the latest changes may break downstream projects, we provide users with a snapshot version at our Downloads page that is verified to build with Tonic Suite. Caffe can be built using different libraries as detailed here, and we recommend reading their installation process to get familiar with it.

$ tar xzf caffe.tar.gz
$ cd caffe
$ ./get-libs.sh
$ sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libboost-all-dev libhdf5-serial-dev libopenblas-dev

Edit Makefile.config and set the flags following the intended build environment.

$ make -j 4
$ make distribute

Set the Caffe installation path in common/Makefile.config. Build the tonic library (libtonic.a) and download the pretrained models in common/:

$ tar xzf djinn-1.0.tar.gz
$ cd common
$ make
$ cd weights
$ ./dl_djinn_weights.sh

Building the DjiNN server

In service/, build the DjiNN server:

$ make -j 4

Running the DjiNN server

Before running the Tonic applications, start the DjiNN service as follows:

$ ./djinn --common ../common/ --weights weights/ --portno 8080 --gpu 1 --debug 0 --nets nets.txt

The above command will launch the DjiNN service using the GPU to process all the received requests. If "--gpu 0" is passed instead, the DjiNN service will use the CPU to process requests. The nets.txt file defines all the networks DjiNN should load at initialization. After the DjiNN service is started, open a new terminal and navigate to the Tonic Suite folder to send requests to DjiNN.
You need to build the corresponding Tonic application in order to use the DjiNN server. To build the Tonic applications, follow these instructions. The following is an example of how to send an IMC request to DjiNN:

$ cd djinn/tonic-suite/img
$ ./tonic-img --task imc --djinn 1 --input imc-list.txt --hostname localhost --portno 8080

The above command is analogous for all the IMG (imc, dig, face), ASR and NLP (pos, chk, ner) applications. Several parameters can be passed to ./tonic-img to configure how the applications run. "--task" configures which type of image request (imc, dig or face) will be launched. "--djinn" configures whether to use the DjiNN service or process the task locally. "--hostname" and "--portno" configure the IP address and port number of the DjiNN service.

Citing DjiNN

If you use DjiNN in your research, please cite the official publication [hauswald15isca].
OPCFW_CODE
What method would you recommend to make Operating System API calls in a cross-platform library you are building? Let me explain further. Is it advisable to use something like:

#ifdef _WIN32
void someFunction() {
    // windows version of the function which makes Windows API Calls
}
#elif defined(__linux__)
void someFunction() {
    // linux version of the function which makes POSIX calls
}
#endif

Or to create separate source code files for each operating system, and then use conditionals in makefiles or other build tools to compile the right source code based on the operating system?

There's no clear right answer that people have settled on. Both of the approaches you mention are commonly used. Feel free to use your judgment and decide for yourself. What are the advantages and disadvantages of either approach? I've seen projects that used both variants. Depending on the particular spot, they made a decision for either variant.

First of all, "best" is at some level opinion-based and might get this question closed. However, my view is that you should work from a standpoint that there is one set of Portable Operating System Interfaces (+X), and that these are what you code to. Then, for systems that lack some or all of the portable interfaces, you can provide drop-in replacement implementations built on top of whatever OS-specific interfaces the particular oddball operating system wants you to use. This allows you to keep complex conditionals and OS-specific logic out of your actual program logic, and isolate it all as platform-support shims.

Some people prefer to do this by building their own portability layer (APR, NSPR, etc.) and treating POSIX just as one of the backends for it. I strongly recommend against doing this, since:

- It imposes significant levels of overhead on systems that already have a portable interface.
- It makes your code hard to read by people who know the standard interfaces but who aren't familiar with your own portability layer.
- It makes your code hard to reuse in projects where your portability layer isn't a good fit.
- It's a huge rabbit hole of yak-shaving that will bog you down and take all your time away from whatever you actually wanted to code.

This is a good answer, although I technically disagree with it. In any case, it offers yet another alternative approach: just don't code your own OS-specific abstractions but use an existing one, like e.g. the mentioned Apache Portable Runtime (APR).

@UlrichEckhardt: Indeed that's an option, but actually reading the source to those was my motivation for bullet point 1. I once hit a problem with svn's code paths to print messages, and found it going through like 4 layers of "portable runtime" stuff to do what was the equivalent of a single portable printf call. (And ultimately the problem was because whatever it was doing back then in one of these layers was not actually portable.)
STACK_EXCHANGE
Missing many F* files from the main FStar repo

Preliminary Steps

Please confirm you have...

- [x] reviewed How Linguist Works,
- [x] reviewed the Troubleshooting docs,
- [x] considered implementing an override,
- [x] verified an issue has not already been logged for your issue (linguist issues).

Problem Description

With F* support added recently (#4178) we can finally benefit from linguist support in the main F* repo at https://github.com/FStarLang/FStar, which is great. However, the language statistics we get for this repo still seem highly inaccurate, since linguist fails to detect many of the F* files in that repo. For instance no F* file is detected in the examples or doc/tutorial/code dirs:

$ github-linguist --breakdown | grep examples
$

(complete breakdown here https://gist.github.com/catalin-hritcu/435ce9d171ca8e52ae3f09822d15749b)

... although the examples dir contains 1000+ F* files:

$ find examples -name "*.fst" | wc -l
1179

Running linguist directly on one of the files does detect the right language, just that it seems that linguist somehow completely misses these files:

$ github-linguist examples/hello/hello.fst
hello.fst: 9 lines (5 sloc)
  type:      Text
  mime type: image/vnd.fst
  language:  F*

I've also tried adding various overrides, but they had no effect on this issue:

examples/* linguist-vendored=false
examples/* linguist-generated=false
examples/* linguist-detectable=true
examples/hello/* linguist-vendored=false
examples/hello/* linguist-generated=false
examples/hello/* linguist-detectable=true
examples/hello/hello.fst linguist-vendored=false
examples/hello/hello.fst linguist-generated=false
examples/hello/hello.fst linguist-detectable=true

URL of the affected repository: https://github.com/FStarLang/FStar
Last modified on: 2018-10-02

Files in examples/ and doc/ are considered documentation by Linguist.
You can use the linguist-documentation override to force Linguist to count these files in language statistics.

@pchaigno Thanks for the quick answer. I tried adding the following to .gitattributes:

examples/* linguist-documentation=false
examples/hello/* linguist-documentation=false
examples/hello/hello.fst linguist-documentation=false

but this seems to have no effect either.

The following seems to work for me:

*.fst linguist-documentation=false

Otherwise, to match all files under examples/, I think you need:

examples/**/*.fst linguist-documentation=false

This seems indeed to work on your fork, but I can't reproduce the success locally with Linguist v7.0.0 (and ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]). I could try a blind fix on the FStar github repo, but I still have no clue what's going on. On my local machine, all of the following have no effect whatsoever:

*.fst linguist-documentation=false
*.fsti linguist-documentation=false
examples/**/*.fst linguist-documentation=false
examples linguist-documentation=false
examples/* linguist-documentation=false
examples/*/*.fst linguist-documentation=false
examples/*/*.fsti linguist-documentation=false
examples/hello/* linguist-documentation=false
examples/hello/hello.fst linguist-documentation=false
doc/tutorial/code/* linguist-documentation=false
doc/tutorial/code/exercises/Ex01a.fst linguist-documentation=false

I still get:

$ github-linguist --breakdown | grep examples
$

Did the blind fix (https://github.com/FStarLang/FStar/commit/1e453cdd2523cef9a3ec0ba607acf69301def37b) and it indeed seems to work for https://github.com/FStarLang/FStar, but as I said I still don't understand why this doesn't also work for me locally.

Wow! Me pushing this to F* master made it work locally. What's going on? Does linguist ignore local changes to files? I missed the following line in the documentation.
> When testing with a local installation of Linguist, take note that the added attributes will not take effect until the .gitattributes file is committed to your repository.

So while it seems odd to me, it's a documented (mis-)feature. Sorry for the noise!

> So while it seems odd to me, it's a documented (mis-)feature. Sorry for the noise!

Oh, right. Sorry, I forgot to mention that :S
GITHUB_ARCHIVE
Can send to all but one user account - 'serviceUrl' cannot be null or empty

Using Company Communicator, we are successful in sending to all but one account in the tenant. The sending account never receives the message, with the log saying "'serviceUrl' cannot be null or empty". Any idea how to address this? Version 4.1.2 is installed and seems to work for everything but sending to the one account.

Member | Failed | -2 : System.ArgumentException: 'serviceUrl' cannot be null or empty (Parameter 'serviceUrl')

Make sure that when you created the app registration you selected multi-tenant as the option. Also make sure that all the URL settings in the app configuration match the correct URLs created by the ARM template.
Hi @glblinck,

Navigate to Azure resource group cc app -> Open storage account -> Click on Storage explorer (preview) -> Delete "UserData" table and recreate the table. When you send the message next time user data will be reloaded.
GITHUB_ARCHIVE
[ts] Property 'createLogger' does not exist on type '{}'.

Please tell us about your environment:

winston version? [x] winston@2 [x] winston@3
node -v outputs:
Operating System? macOS
Language? ASP.NET Core, TypeScript 2.9.1, es5, ReactJS

What is the problem?

After installing ASP.NET Core with React.js

dotnet new react

and adding the getting-started example for either winston3 or winston2 to Home.tsx:

import * as React from 'react';
import { RouteComponentProps } from 'react-router';

const winston = require('winston');
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'combined.log' })
  ]
});

logger.log({
  level: 'info',
  message: 'Hello distributed log files!'
});

I'm seeing compilation errors from TypeScript.

What do you expect to happen instead? I'm expecting to be able to log to Console and File.

{"level":"info","message":"Hello distributed log files!"}

Other information

My tsconfig.json:

{
  "compilerOptions": {
    "baseUrl": ".",
    "module": "es2015",
    "moduleResolution": "node",
    "target": "es5",
    "jsx": "react",
    "sourceMap": true,
    "skipDefaultLibCheck": true,
    "strict": true,
    "types": ["webpack-env"]
  },
  "exclude": [
    "bin",
    "node_modules"
  ]
}

I can fix the TypeScript error by defining

const winstonType: any = winston;
const logger = winstonType.createLogger({
  transports: [
    new winstonType.transports.Console(),
    new winstonType.transports.File({ filename: 'combined.log' })
  ]
});

But then get errors:

Module not found: Error: Can't resolve 'fs' in '/hello-logging-winston/node_modules/winston/lib/winston'
Module not found: Error: Can't resolve 'fs' in '/hello-logging-winston/node_modules/winston/lib/winston/transports'

I've since found that ReactJS is not supported but is on the v3 roadmap. According to https://github.com/winstonjs/winston/issues/900, I must use import * as winston from 'winston' in order to get all exports. That fixes the compiler issues I was getting.
However, I am still getting the errors Module Not Found Can't resolve 'fs' ... Can I get some direction on this, please??

Sorry, what version of winston are you using exactly? Can you try winston@master, by e.g. specifying ... "winston": "winstonjs/winston#master", ... in your package.json? (And then reinstall node modules, e.g. rm -rf node_modules && npm install .) It does look like you're running into other errors as well (e.g. with react), or possibly your project is trying to typecheck TS definitions under node_modules, which it shouldn't do (that should be some setting in tsconfig.json or something).

Are you using @types/winston with Winston 3.0?

Thanks for the replies! I was using<EMAIL_ADDRESS>

npm list winston
<EMAIL_ADDRESS>/Users/ddumaresq/Projects/EPAS2/hello-logging-winston
└──<EMAIL_ADDRESS>

And have now updated including @types/winston.

npm list winston
<EMAIL_ADDRESS>/Users/ddumaresq/Projects/EPAS2/hello-logging-winston
└──<EMAIL_ADDRESS>(github:winstonjs/winston#8be746b4fba623c7167420c887ee9cf3d4147664)

npm list @types/winston
<EMAIL_ADDRESS>/Users/ddumaresq/Projects/EPAS2/hello-logging-winston
└──<EMAIL_ADDRESS>

Still getting errors, though.

I believe the issue is that @types/winston is still designed for version 2.x of winston, which has some major differences from version 3.x. I think once @types/winston is updated to version 3.x, you should be able to use that and this error will go away.

I've removed @types/winston and get

Hmmm I don't have any ideas off the top of my head. I've never used winston with react personally.

Cool, those seem like legitimate errors. The file transport uses the native Node fs module for writing to log files on a server. It's not possible to directly use the fs module in the browser, since the browser filesystem (e.g. HTML5 local storage) is very different than the native fs in a server environment (and you can't directly log to a client's filesystem from the browser -- security issues etc.).
You could try another transport like the console transport that doesn't rely on that module. Otherwise you would need to use some other transport that is browser-compatible.

Btw if you add the following to your webpack config, it seems the console transport at least works. But the point is you just ignore the fs module...

node: { fs: 'empty' }

This worked for me in a test webpack project with the console transport. Good luck!

@DABH - I've followed your advice and fixed a number of errors. But one remains:

ERROR in [at-loader] ./node_modules/winston/index.d.ts:22:15 TS2304: Cannot find name 'Map'.

Are you sure you're using ES6? https://stackoverflow.com/a/49569255/4451284 ES5 will not work since it's missing types like Map. (If you have to use ES5 for some reason, I think there are polyfills available for Map etc. that could make things work... but just use ES6 ;) )

Thanks, it fixes the Map error, but unleashes many more @types errors in react, react-dom, react-router and history. I'll look at polyfills for Map...

Going to close this for now (seems we are outside the scope of winston issues) but feel free to open a new issue if you find any new winston-specific problems. Thanks!

I am getting this error:

ERROR in ./node_modules/winston/dist/winston/tail-file.js
Module not found: Error: Can't resolve 'fs' in 'C:\Users\703247433\Desktop\winstontoolkit\toolkit\node_modules\winston\dist\winston'
ERROR in ./node_modules/winston/dist/winston/transports/file.js
Module not found: Error: Can't resolve 'fs' in 'C:\Users\703247433\Desktop\winstontoolkit\toolkit\node_modules\winston\dist\winston\transports'
GITHUB_ARCHIVE
Users who mentioned each other in Gremlin

We have a smaller example twitter database:

user -[TWEETED]-> tweet -[MENTIONED]-> user2

and I would like to find out how to write a query in Gremlin that shows which users mentioned each other. I have already read the docs but I don't know how to do it.

So you would also need a "user2 tweet that mentioned user" to have a scenario where user and user2 mentioned each other, right?

Yes, exactly, but the "user tweeted tweet..." line only wants to show you the "schema".

Given this sample data, which assumes marko and stephen mention each other and marko and daniel mention each other:

g = new TinkerGraph()
vMarko = g.addVertex("marko", [type:"user"])
vStephen = g.addVertex("stephen", [type:"user"])
vDaniel = g.addVertex("daniel", [type:"user"])
vTweetm1s = g.addVertex("m1s", [type:"tweet"])
vTweetm2d = g.addVertex("m2d", [type:"tweet"])
vTweets1m = g.addVertex("s1m", [type:"tweet"])
vTweetd1m = g.addVertex("d1m", [type:"tweet"])
vMarko.addEdge("tweeted",vTweetm1s)
vMarko.addEdge("tweeted",vTweetm2d)
vStephen.addEdge("tweeted",vTweets1m)
vDaniel.addEdge("tweeted",vTweetd1m)
vTweetm1s.addEdge("mentioned", vStephen)
vTweetm2d.addEdge("mentioned", vDaniel)
vTweets1m.addEdge("mentioned", vMarko)
vTweetd1m.addEdge("mentioned", vMarko)

you could handle it with the following:

gremlin> g.V.has("type","user").as('s')
           .out("tweeted").out("mentioned").as('m')
           .out("tweeted").out("mentioned").as('e')
           .select.filter{it[0]==it[2]}
==>[s:v[daniel], m:v[marko], e:v[daniel]]
==>[s:v[stephen], m:v[marko], e:v[stephen]]
==>[s:v[marko], m:v[stephen], e:v[marko]]
==>[s:v[marko], m:v[daniel], e:v[marko]]

This approach uses select to extract the data from the labelled steps, then a final filter to find the paths where "s" (the vertex in the first position) is equal to "e" (the vertex in the final position). This of course means that a cycle pattern is detected where one user mentioned another and the other mentioned that person back at some point.
If you follow that much, then we can clean up the result a little bit so as to get the unique set of pairs:

gremlin> g.V.has("type","user").as('s')
           .out("tweeted").out("mentioned").as('m')
           .out("tweeted").out("mentioned").as('e')
           .select.filter{it[0]==it[2]}
           .transform{[it[0].id,it[1].id] as Set}.toList() as Set
==>[daniel, marko]
==>[stephen, marko]

By adding a transform to the previous code, we can convert each result to "id" (the user's name in this case) and flip everything to a Set so as to get unique pairs of results.

One more "extra": what if we only need those who do not KNOW each other? In SQL this would be something like "not in" or "minus".

Do you mean someone who mentioned someone who never mentioned them back?

I mean "adding another relation" to the schema, like:

user1 -Knows-> user2

Using this I would like to add the restriction that user1 mentioned user2 and vice versa, but user1 does not know user2 and vice versa.

Maybe just extend the filter after the select to check the "knows" edge:

.filter{it[0]==it[2] && !it[0].both('knows').retain([it[1]]).hasNext()}
[Piglit] [PATCH] Revert "fs-discard-exit-3: New test for another bug in handling 1.30's discard rule."

eric at anholt.net
Mon Feb 2 12:05:56 PST 2015

Francisco Jerez <currojerez at riseup.net> writes:

> Eric Anholt <eric at anholt.net> writes:
>> Francisco Jerez <currojerez at riseup.net> writes:
>>> This reverts commit 3fad0868f023f1d726e230968a4df3327de38823.
>>>
>>> This test doesn't make any sense to me, it begins quoting the GLSL
>>> 1.30 spec on the interaction of the discard keyword with control flow:
>>>
>>> "[...] Control flow exits the shader, and subsequent implicit or
>>> explicit derivatives are undefined when this control flow is
>>> non-uniform (meaning different fragments within the primitive take
>>> different control paths)."
>>>
>>> IOW the discard keyword is a control flow statement that can
>>> potentially make subsequent derivatives undefined if only some subset
>>> of the fragments execute it. The test then goes on and does the exact
>>> opposite: It samples a texture after a non-uniform discard expecting
>>> that implicit derivatives will be calculated correctly, while
>>> according to the spec quotation they have undefined results.
>>>
>>> If the quoted text doesn't seem clear enough, see section 6.4 "Jumps"
>>> of the same specification:
>>>
>>> "These are the jumps:
>>>  discard; // in the fragment shader language only"
>>>
>>> and section 8.7 "Texture Lookup Functions":
>>>
>>> "Implicit derivatives are undefined within non-uniform control flow
>>> and for vertex shader texture fetches."
>>>
>>> More recent spec versions have made the meaning of non-uniform control
>>> flow clearer. From the GLSL spec version 4.4, section 3.8.2 "Uniform
>>> and Non-Uniform Control Flow":
>>>
>>> "Control flow becomes non-uniform when different fragments take
>>> different paths through control-flow statements (selection,
>>> iteration, and jumps). [...] Other examples of non-uniform flow
>>> control can occur within switch statements and after conditional
>>> breaks, continues, early returns, and after fragment discards, when
>>> the condition is true for some fragments but not others."
>>>
>>> There was some discussion about this topic in Khronos bug 5449, which
>>> motivated the inclusion of the first sentence quoted above in the GLSL
>>> 1.30 spec. The rationale was that continuing the execution of
>>> discarded fragments after a non-uniform discard would be ill-defined
>>> because it could easily lead to infinite loops and break invariants of
>>> the program.
>>
>> Yeah, on the other hand, we found that not continuing the execution of
>> discarded fragments within a subspan that contained still-enabled
>> fragments caused incorrect rendering.
>> See 9e9ae280e215988287b0f875c81bc2e146b9f5dd.
>
> How about a drirc option to (partially) support derivatives after
> non-uniform discard for applications that rely on this non-compliant
> behaviour? Do you remember any other applications that relied on this
> other than Tropics? [which BTW has been broken for half a year for
> other reasons]
>
> It's unfortunate that we end up emitting extra code (for keeping track
> of the enabled-but-discarded channel mask and for terminating loops
> early) and that we run more channels than necessary on *all*
> applications with the only purpose of enabling this non-compliant
> behaviour probably very few applications rely on.
>
> [Cross-posting to mesa-dev because this is more of an implementation

I think we ended up fixing GLB2.7 with the revert, as well? This is a long time ago.

If you want to experiment with following the spec behavior (even though apparently other implementations didn't), you're going to want to test a *lot* of apps.
Hi Henning, hi everyone. Thanks for your time. I'll correct the errors right away. The only reason that we used source is that we can keep up to date more easily. With xsane, and previously the gimp 1.1 series, there were many updates, and it was important for us in our changeover from crashintoshes that we had the very best that was available. All we can say is that the method we use works for us and our friends, and that we have never had any of the niggles with it that many other users seem to have. The only bit we don't understand is the bit about /etc/ld.so.conf. We have never touched this file before. Where does it fit in?

Best wishes from all at FeF, Spain.

On Friday 20 April 2001 22:47, you wrote:
> On Fri, Apr 20, 2001 at 07:05:44AM +0200, scc wrote:
> > We put together a step by step guide for sane specifically for stuff like
> > this at: www.arrakis.es/~fsanta/sane/howto.html
> > It has been tested with Mandrake too and it works, but it would mean
> > having to start the installation over again.
>
> You are asking for suggestions on your page, so here they are: :-)
>
> * Maybe add a paragraph about why to go through all the hassle of
>   installing everything from source. Some distributions (e.g. SuSE
>   7.1) come with current packages of SANE, so I would only recommend
>   compiling SANE if something doesn't work. The same goes for gtk and
>   gimp; I would only download and compile them if the installed
>   binaries are too old to compile xsane.
>
> You write that the glib and gtk that come with the distribution
> should be removed. This will cause trouble with programs that came
> with the distribution and that are linked against gtk. I wouldn't
> recommend doing it if you aren't really sure what you are doing.
>
> In my experience the most common problems beginners hit when compiling
> SANE and a frontend are: headers of X, gtk, or gimp not installed;
> /usr/local/lib not in /etc/ld.so.conf; /usr/local/lib/sane in
> /etc/ld.so.conf by mistake; /usr/local/bin not in $PATH.
> Maybe I missed something, but you don't mention the /etc/ld.so.conf
> thing. However, I'm pretty sure that at least for Debian potato this
> entry is absolutely necessary. Compiling xsane won't work without it.
>
> * I would be very careful with copying kernel headers to glibc
>   directories. This may work for SANE but may break compilation of
>   other programs using SCSI. And you probably won't remember that you
>   changed /usr/include/scsi if these problems occur.
>   Because this is for beginners, I would add that you need to have
>   installed the kernel source, or at least the kernel headers.
>   What I did when having problems with different sg.h files in
>   /usr/include and /usr/src/linux was to just rename sg.h to something
>   else. Then the file in /usr/src/linux was used automatically.
>
> * Better write down that the permissions of /dev/sg0 must be rw for
>   "other". At least, I can't read the text in your file manager. If
>   more than one person uses the system, I wouldn't do that, however.
>   Better to set /dev/sg0 to rw for a group (e.g. "scanner") and add all
>   the users that are allowed to scan to that group.
>
> * Why do you make the link to xsane (in the user's ~/.gimp directory) as
>   root? And be sure to mention that gimp shouldn't be run as root.
>
> * The "Special note: USB scanners" contains some unclear points and
>   some mistakes, in my opinion: The second sentence says to include the
>   following line in modules.conf. The following line is "Model :
>   Vendor ID : Product ID" :-).
>   You are asking for vendor/product ids of other scanners. Maybe you
>   want to link to http://www.linux-usb.org; they have a list of such
>   ids.
>   You write that it's necessary to add "modprobe scanner vendor=<your
>   vendor ID> product=<your product ID>" to modules.conf. However,
>   modules.conf contains the options of the modules, not the modprobe
>   itself. So something like "options scanner vendor=<your vendor ID>
>   product=<your product ID>" should work.
>   By the way: In principle scanners should work without manually adding
>   the ids; if they don't, please send the necessary ids and the name of
>   the scanner to the author of the scanner module for inclusion.
>   Better add a warning that, at least for Debian, "modules.conf" shouldn't
>   be changed manually. As the comment in this file says:
>   # Please do not edit this file directly. If you want to change or add
>   # anything please take a look at the files in /etc/modutils and read
>   # the manpage for update-modules.
>
> Concerning *.conf when you don't use the epson backend: I'm not sure
> that every backend uses the "usb /dev/something" entry in *.conf, so
> better to point to the documentation.
> After the paragraph about epson.conf you write "Mine looks like
> this: modprobe scanner vendor=0x04b8 product=ox010c". This should be
> moved to the paragraph about modules.conf (and exchange "modprobe" for
> "options"). The hex codes must start with 0x, not ox.
>
> I would *not* add the link from /dev/usbscanner to /dev/scanner.
> /dev/scanner is used by all the SCSI backends, and looking at a USB
> device file as a SCSI generic file may cause havoc.
>
> Generally: For all the things you need to do as root, I would recommend
> using something like 'su -c "make install"'. This way there is no risk
> of forgetting to "exit" the su shell.
>
> Source code, list archive, and docs: http://www.mostang.com/sane/
> To unsubscribe: echo unsubscribe sane-devel | mail firstname.lastname@example.org

--
Source code, list archive, and docs: http://www.mostang.com/sane/
To unsubscribe: echo unsubscribe sane-devel | mail email@example.com

This archive was generated by hypermail 2b29 : Sat Apr 21 2001 - 11:26:16 PDT
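For what it's worth, the /etc/ld.so.conf step debated in this thread boils down to making the dynamic linker search /usr/local/lib, which is where a default source build of SANE installs its libraries. A sketch of the relevant line (assuming that default prefix; your distribution's file will contain other entries too):

```
# /etc/ld.so.conf -- one directory per line searched by the dynamic linker
/usr/local/lib
```

After adding the line, run ldconfig as root so the linker cache (/etc/ld.so.cache) is rebuilt; until then, newly installed libraries in /usr/local/lib will not be found at program start-up.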
I encountered this while investigating current mochitest hangs.

Consider the case where an output timeout has triggered a break from the loop at https://dxr.mozilla.org/mozilla-central/rev/cad9c9573579698c223b4b6cb53ca723cd930ad2/testing/mozbase/mozprocess/mozprocess/processhandler.py#980

Now the ProcessReader continues to wait for the stdout/stderr reader threads to end: https://dxr.mozilla.org/mozilla-central/rev/cad9c9573579698c223b4b6cb53ca723cd930ad2/testing/mozbase/mozprocess/mozprocess/processhandler.py#996-999

I think those threads normally end because the test harness output timeout handler has killed the process (typically via mozcrash or a similar killAndGetStack()), causing readline() to return None: https://dxr.mozilla.org/mozilla-central/rev/cad9c9573579698c223b4b6cb53ca723cd930ad2/testing/mozbase/mozprocess/mozprocess/processhandler.py#933-935

But what if the process didn't die (bug 1404190?) and the stream remains open? The ProcessReader knows it has timed out, has drained its queue, and isn't interested in any more output, but it still waits for those threads... possibly forever.

I don't know how to proceed here. The existing code is efficient and straightforward, blocking on readline(). It looks to be very difficult to interrupt a thread in Python... and probably dangerous. Other than killing the process (since we are having trouble with that), is there a way to break out of readline()? Close the stream?? asyncio could be helpful... but it is only available in Python 3, afaik.

(In reply to Geoff Brown [:gbrown] from comment #1)
> Close the stream??

I tried closing stdout/stderr from another thread (threading.Timer), but it didn't work. It just threw:

IOError: close() called during concurrent operation on the same file object.

and left the reader thread blocked on readline().

Bug https://bugzilla.mozilla.org/show_bug.cgi?id=1391545, which is recommended for disabling, depends on this bug.
:whimboo, :gbrown, do you have any updates regarding this bug?

I'm not working on this bug and don't have any new ideas for unblocking it. This bug could theoretically cause bug 1391545, but I don't know if it *actually* is a cause of 1391545.

I don't have updates either. A better approach would be to get Marionette client logging implemented via bug 1394381, which should also be on the deps list of bug 1391545. I will add it.
I'll just get this out of the way at the start: there are lots of questions like this on the forum (especially lately, for some reason). Pretty much all of the advice there will be relevant, so have a look around the site (there is a search bar).

"Should I" is a question only you can really answer. If the question is "are there jobs in bioinformatics?" or "will I be employable?", then yes, absolutely. It would not be a 'waste' of your time to follow it as a career. If you have some practical biological education, then hold on to that firmly too, as bioinformaticians with true biological insight are still a rare (and much more sought after) breed.

Unlike the first answer here, I don't particularly place much value on MScs. You can get onto PhD courses directly without needing one, and it's perfectly valid to 'learn on the job', particularly if you want to be what I call a "Type II" bioinformatician, i.e. one analysing data rather than writing tools specifically.

The only thing in your post that would give me pause, I think, is that you state you want to go into academia. This is, of course, totally possible; however, academia is increasingly difficult to make a path into. In the UK, less than 1% of PhD graduates ever go on to hold a professorship, for instance. The reason I mention this specifically is that I believe bioinformatics has an even harder time of it than most. Bioinformaticians still often struggle against being seen as a 'service' rather than a legitimate discipline in their own right. This can (does and has) make pushing your way into academia as a bioinformatician that bit more difficult. It's improving over time for sure, but there is still a breed of academics who think bioinformaticians should be 'seen but not heard'.

It's not all doom and gloom, however. Academia has always been notoriously difficult to navigate. I would seriously recommend giving some consideration to joining biotech startups.
They're growing exponentially, and are very thirsty for informaticians. You'd get to do all the same work in all likelihood (and be paid more to boot!).

Some practical advice for an interview scenario might include:

- Ensure you're as comfortable with programming and the Unix command line as you can reasonably be.
- Familiarise yourself with publications on the type of data you're interested in, and in particular get to know at least the names of some of the big software suites in that area.

DISCLAIMER: My opinions are based heavily on my experience in the UK system. It's possible that elsewhere (e.g. Italy) MSc courses are more highly valued, for example. So take what I've said at face value!
I guess the kernel panic occurred due to one or more of the following reasons:

1. Compiling the kernel is basically installing the kernel, so instead of replacing the old kernel it got stuck in a deadlock where neither kernel could use the memory, hence the panic.
2. The new kernel entered some kind of loop; think of it as "it tried to call itself", or simply a type of fork bomb, hence the out-of-memory error and panic.
3. The partition allocated for the new kernel was not sufficient. Try making a different partition, mounting it, and symlinking it so that other programs can execute from it. Then recompile using gcc and the toolchain (linker).
4. Instead of replacing the kernel, it killed itself.

If you extract a kernel tarball that is initially 300 MB, after extracting (i.e. "tar -xvf kernel.tar.xz") it becomes a giant of 900 MB. After installing packages like gcc, g++, gnu-make, etc. and compiling it, the build tree may become a mammoth of 2 GB. So if the partition doesn't have that much space, an out-of-memory error may occur.

You didn't provide a swap area. As building a kernel is resource-intensive, it needs a good amount of disk space, a swap area, and RAM. During installation it creates sub-folders like /mount, /proc, /boot, /dev, etc. so that each part gets its respective location, which somehow failed due to running out of memory. Usually a kernel is compiled in a chroot-like environment, but if the chroot is not properly set up, it may fail to get a resource like proc, mnt, or dev, hence the panic.

If you still need some help, then check out these two places: #1 the kernel team for Ubuntu, #2 the kernel debugging team for Ubuntu.

As your update seems to imply that compiling the kernel worked, it could only mean that you compiled the whole kernel. Usually a generic kernel has everything in it, which may be important to one user but a waste of space and time for another. So you could instead use the new kernels from the Ubuntu kernel archive; in my case it's v4.14-rc1. Download the kernel which matches your arch.
I have x86_64, so I would choose the 64-bit one. I would download these and then issue "sudo dpkg -i *.deb" to install them all.

Now it seems that the kernel you compiled is more than 1 GB, so it is using more resources. The Ubuntu kernel has many parts stripped out, so it is small and uses heavy compression. Your kernel might have everything in it and be usable by IoT devices, servers, and desktops, and may be compatible with arm64, amd64, etc., as it may have all the modules needed to run on all of these. So to reduce it, you need to remove all the parts that are useless in your case, like drivers for hardware you have never heard of, stripping server parts wherever possible.

Great work, by the way, as kernel compiling is a headache: it demands intensive resources, time, and patience.

As you mentioned that your initrd.img file was huge: could you at least post a screenshot of your /boot/, or in a terminal use "ls -l" to list the size of everything? The kernel happens to be the heart of any Linux distro. It is very complex, so without knowing anything, one can only guess at the issue. I would still suggest you strip the unwanted modules from your kernel and then compile it.
import curses
import random
import sys

# curses arrow-key codes: 260 left, 261 right, 259 up, 258 down

def main(screen):
    width = 40
    height = 20
    screen.nodelay(True)
    curses.curs_set(0)
    window = curses.newwin(height, width, 0, 0)
    x = 4
    y = 4
    direction = 261
    old_coordinates = [(x, y), (x - 1, y), (x - 2, y)]
    initial_delay = 100
    score = 0

    def get_direction(c):
        # accept a new arrow key unless it reverses the current direction
        if c == 260 or c == 261 or c == 259 or c == 258:
            if direction == 261 and c == 260:
                pass
            elif direction == 260 and c == 261:
                pass
            elif direction == 259 and c == 258:
                pass
            elif direction == 258 and c == 259:
                pass
            else:
                return c
        return direction

    def get_coordinates(x, y):
        if direction == 260:
            x -= 1
        elif direction == 261:
            x += 1
        elif direction == 259:
            y -= 1
        elif direction == 258:
            y += 1
        return x, y

    def create_apple():
        # pick a random spot that is not on the snake
        while True:
            applex = random.randint(1, width - 2)
            appley = random.randint(3, height - 3)
            for coordx, coordy in old_coordinates:
                if applex == coordx and appley == coordy:
                    break  # apple landed on the snake, try again
            else:
                return applex, appley

    def center_text(text):
        text_center = int(len(text) / 2)
        return text_center

    def new_game():
        lastb = 260
        while True:
            window.clear()
            screen.clear()
            draw_walls(new_game=True)
            centerx = int((width - 1) / 2)
            thirdy = int((height - 2) / 3)
            forthy = int((height - 2) / 4)
            offset_title = center_text("@SNAKE v.1.0")
            window.addstr(forthy, centerx - offset_title, "@SNAKE v.1.0")
            b = screen.getch()
            if b != 260 and b != 261:
                if b == 10:  # Enter confirms the highlighted choice
                    if lastb == 260:
                        return
                    if lastb == 261:
                        sys.exit()
                b = lastb
            if b == 260:
                window.addstr(thirdy * 2, 5, "START GAME", curses.A_STANDOUT)
                window.addstr(thirdy * 2, width - 11, "QUIT")
                lastb = 260
            if b == 261:
                window.addstr(thirdy * 2, 5, "START GAME")
                window.addstr(thirdy * 2, width - 11, "QUIT", curses.A_STANDOUT)
                lastb = 261
            window.refresh()
            screen.refresh()
            curses.napms(100)

    def game_over():
        lastb = 260
        while True:
            window.clear()
            screen.clear()
            draw_walls()
            update_score(score)
            create_title()
            centerx = int((width - 1) / 2)
            thirdy = int((height - 2) / 3)
            forthy = int((height - 2) / 4)
            offset_game_over = center_text("GAME OVER")
            window.addstr(forthy, centerx - offset_game_over, "GAME OVER")
            b = screen.getch()
            if b != 260 and b != 261:
                if b == 10:
                    if lastb == 260:
                        return False
                    if lastb == 261:
                        sys.exit()
                b = lastb
            if b == 260:
                window.addstr(thirdy * 2, 5, "PLAY AGAIN", curses.A_STANDOUT)
                window.addstr(thirdy * 2, width - 11, "QUIT")
                lastb = 260
            if b == 261:
                window.addstr(thirdy * 2, 5, "PLAY AGAIN")
                window.addstr(thirdy * 2, width - 11, "QUIT", curses.A_STANDOUT)
                lastb = 261
            window.refresh()
            screen.refresh()
            curses.napms(100)

    def draw_walls(new_game=False):
        for n in range(width):
            window.addch(0, n, "-")
            if not new_game:
                window.addch(2, n, "-")
            window.addch(height - 2, n, "-")
        for n in range(height - 1):
            window.addch(n, 0, "|")
            window.addch(n, width - 1, "|")

    def collision_check(x, y):
        for coordx, coordy in old_coordinates:
            if x == coordx and y == coordy:
                return True
        if x < 1 or x > width - 2 or y < 3 or y > height - 3:
            return True
        return False

    def update_score(score):
        window.addstr(1, 2, f"SCORE: {score}")

    def create_title():
        window.addstr(1, width - 14, "@SNAKE v.1.0")

    new_game()
    applex, appley = create_apple()
    while True:
        screen.clear()
        window.clear()
        draw_walls()
        update_score(score)
        create_title()
        c = screen.getch()
        direction = get_direction(c)
        x, y = get_coordinates(x, y)
        if collision_check(x, y):
            if not game_over():
                # reset the game state and start over
                x = 4
                y = 4
                direction = 261
                score = 0
                initial_delay = 100
                old_coordinates = [(x, y), (x - 1, y), (x - 2, y)]
                continue
        old_coordinates.insert(0, (x, y))
        if x == applex and y == appley:
            applex, appley = create_apple()
            score += 10
            update_score(score)
            if initial_delay > 49:
                initial_delay = initial_delay - 2
        else:
            old_coordinates.pop()
        for coordx, coordy in old_coordinates:
            window.addch(coordy, coordx, "@")
        window.addch(appley, applex, "O")
        window.refresh()
        screen.refresh()
        curses.napms(initial_delay)

curses.wrapper(main)
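For reference, the magic numbers the game uses for arrow keys (260/261/259/258) are the standard ncurses keycodes, which the curses module also exposes as named constants. A tiny sketch of the mapping using the names rather than raw ints (the `name_of` helper is only illustrative, not part of the game above):

```python
import curses

# getch() returns these values for the arrow keys on standard ncurses builds
ARROWS = {
    curses.KEY_LEFT: "left",    # 260
    curses.KEY_RIGHT: "right",  # 261
    curses.KEY_UP: "up",        # 259
    curses.KEY_DOWN: "down",    # 258
}

def name_of(keycode):
    """Map a getch() keycode to a direction name, or None if not an arrow."""
    return ARROWS.get(keycode)
```

Using the constants instead of raw ints would make `get_direction` self-documenting without changing behaviour.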
Fog image choice with USB

Is it possible to choose a menu entry with a USB key? What I mean by that is: can I boot on a USB key and deploy an OS without the PXE menu?

[quote="Valden, post: 22881, member: 21858"]Do you think that it is possible to download the USB image directly into RAM with tftp?[/quote]
It may be possible to load it with tftp, but you would have to make sure that enough RAM is available for the image. FOG isn't designed to handle ISO image files.

Do you think that it is possible to download the USB image directly into RAM with tftp?

The reason FOG works is that it has already loaded the networking and is its own system before FOG loads. During the initial load, though, it's loading directly off of the USB. I'm glad we could be of some help, but I would think it would work better to boot the ISO off of the USB, as during the initial load you don't have access to the network, so loading items from the network would be impossible. I don't even think PXE can load the ISO from a network location.

Thank you very much for the help! For the OS deployment the problem is solved. But for the ISO file: my path /tftpboot/ already has all permissions. Should I configure a Samba shared space? But I don't understand one thing: I can deploy OSes without problems, so I wonder why I can't do the same with ISO files? I think I'm missing something...

vmlinuz is just an "executable" of the Linux kernel, for lack of a better way to describe it.

In that case, give it a go! Since you aren't needing anything specific from the IT department, and everything is housed on machines you manage, set up a shared space, save your image there, and find the right syntax to get it to load via the "USB FOG Server".

It's just for a school research project; I don't use the university servers. I've made a virtual machine for my FOG server and I have a physical computer for the USB boot. I had already set up a FOG server, but that was without the USB system my teacher wants.
And for the kernel line? Are there different kinds of kernel? Because I saw memdisk, vmlinuz... Sorry for my English, I'm French.

[quote="Valden, post: 22694, member: 21858"]Does it work only on the USB key? Is it possible to put a network path to the initrd?[/quote]
Only one way to find out!

It depends on your network; if it is ANYTHING like my network (I work in education), it will be troublesome. We make every user authenticate; if you do not authenticate, you only get public rights. So you need to understand your network environment. From what I gather, you are a student, correct? I do not allow my students open space to store files and information on my server... you will need to check with your local IT department to find a place to put your files for access, possibly even create a special user and password. THAT BEING SAID, I have written scripts in the past to authenticate to a network storage drive on Windows, so the task is possible; it's just a matter of learning to write a script that can authenticate itself as a user whose home directory is the storage space of the ISO image. I would use something like:

append iso initrd=10.x.x.x/path/to/FILENAME.iso

where 10.x.x.x is the IP address of the server. You may have to syntax it another way to get it to see the server too; this is only an example, and not a working one.

Does it work only on the USB key? Is it possible to put a network path to the initrd?

You'll have to have the ISO file available on the USB key. To work it properly you could create a menu entry like:

append iso initrd=win7.iso raw
MENU LABEL Windows 7

Boot the Windows 7 ISO from this menu. It needs lots of RAM and time to wait for "Please press any key to boot from...". Of course, add as many entries as needed and change the files as needed. The initrd will need the actual location, just like the kernel is calling.
So if you place the ISO in fog/win7.iso, your initrd command will be:

initrd=fog/win7.iso

Thank you, it works :D, but how can I do this for an ISO file?

You'll have to manually set this up. What I mean is you'll need to, probably, create a script on the key to set up the storage, type, etc. Maybe follow a similar process for the menu so you can just make a choice of upload/download etc... The FOG GUI will not be of help here, so you'll always have to boot to a specific menu option. The GUI will only be useful in the creation of the task so this works properly. It will also only be useful for the management of the images. Something along the lines of:

APPEND initrd=fog/images/init.gz root=/dev/ram0 rw ramdisk_size=127000 ip=dhcp dns=10.10.10.1 mac=00:1e:ec:e3:dd:1b ftp=10.0.7.1 storage=10.0.7.1:/data/fogstorage/images/dev/ storageip=10.0.7.1 web=10.0.7.1/fog/ osid=5 loglevel=4 consoleblank=0 irqpoll chkdsk=0 img=win7activatedsysprepgen imgType=n imgid=6 PIGZ_COMP=-9 hostname=testname pct=5 ignorepg=1 type=up
MENU LABEL Image Upload

Upload image to server.

APPEND initrd=fog/images/init.gz root=/dev/ram0 rw ramdisk_size=127000 ip=dhcp dns=10.10.10.1 mac=00:1e:ec:e3:dd:1b ftp=10.0.7.1 storage=10.0.7.1:/data/fogstorage/images/dev/ storageip=10.0.7.1 web=10.0.7.1/fog/ osid=5 loglevel=4 consoleblank=0 irqpoll chkdsk=0 img=win7activatedsysprepgen imgType=n imgid=6 PIGZ_COMP=-9 hostname=testname pct=5 ignorepg=1 type=down
MENU LABEL Image Download

Download image from server to host.

Change all the IPs to their respective values. Change the hostname to what you want the hostname to be. Change the imageName to what you want it to be. Change the imgType to:

n = single disk resizable
mps = multipart single disk non-resizable
mpa = multipart all disk non-resizable
dd = raw

Make sure the imgid matches that of a valid image. Make sure the img matches the name of a valid image. Change the MAC address to that of your client to be imaged. It's not pretty, but it is doable.
It's for a school project; the task comes from my teacher. He wants me to find a way to boot or deploy an image from the server with a USB boot.

I may have found a way. I put syslinux on my USB key, and I copied the /tftpboot/fog directory to the root of the key. I put the file pxelinux.cfg/default at the USB key root and renamed it syslinux.cfg. Then I can boot a file on the key, but I can't boot a file on the FOG server yet. Here is an example of my default file on the USB key:

append iso initrd=memtest/memtest.iso ip=dhcp dns=10.7.112.15 ftp=10.7.112.55 storage=10.7.112.55:/images/ web=10.7.112.55/fog/

How can I boot a file on the server with the menu on my USB key?

You can look into gPXE; you can set it up so the disc looks for your FOG server information. I used it as a troubleshooting step when trying to find the issue with my network. There are USB bootable versions of gPXE, so you can give that a shot if you want. However, I may be misunderstanding you. This will allow you a USB bootable to FIND your FOG server already set up. If what you are asking for is a "mobile" FOG server on a USB drive, unfortunately I have no suggestions for you; I have not looked into this.

What do you mean? The init.gz and bzImage are the main components of FOG. These are the "workhorse" of the FOG system. If you download FOG, you will have the init.gz and bzImage files. I do not know specifically how you could get them working on a USB stick, but I'm certain it's possible. I don't know if anyone's done this AND written a tutorial on how to do it, though. The init.gz is the root filesystem of Linux that gets loaded and contains all the FOG scripts needed for FOG imaging. The bzImage is the Linux kernel that contains all the drivers needed to run your client systems. I know it's not much help, but it should give you a pretty good starting point.

Does a topic exist on the init.gz and bzImage?
I’ve never had to try this, but I’d imagine, with use of syslinux and the init.gz/bzImage we’ve already created, this is plenty possible.
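Putting the pieces of this thread together, a syslinux.cfg on the USB key might look roughly like the following. This is only a sketch: the label names, file paths, and the 10.0.7.1 addresses are placeholders echoing the examples given earlier in the thread, not a tested configuration.

```
DEFAULT menu.c32
PROMPT 0
TIMEOUT 100

LABEL win7
    MENU LABEL Windows 7
    KERNEL memdisk
    APPEND iso initrd=win7.iso raw

LABEL fog-upload
    MENU LABEL Image Upload
    KERNEL fog/kernel/bzImage
    APPEND initrd=fog/images/init.gz root=/dev/ram0 rw ramdisk_size=127000 ip=dhcp web=10.0.7.1/fog/ storage=10.0.7.1:/data/fogstorage/images/dev/ type=up
```

As noted above, the ISO entries need the file on the key itself (memdisk loads it into RAM), while the FOG entries reach the server over the network once the kernel and init.gz have booted.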
Which Herp Is Which?

By writing to "pen pals" from the point of view of a reptile or amphibian, your students can be creative while learning about how these animals are alike and different.

- List several examples each of reptiles and amphibians.
- Describe how reptiles and amphibians are similar to and different from each other.

- Books and other reference materials on herps.
- Index cards.
- Pencils and crayons or markers.

- Social studies, language arts

- 1. Before the activity, write the names of several types of herps on slips of paper (one for each person). Try to include an equal number of reptiles and amphibians. Depending on the level of your group, you can keep the names general (e.g. frog, snake) or make them more specific (e.g. bullfrog, garter snake). Write a number on each slip so you can keep track of who has which herp.
- 2. Hand out the slips of paper you made earlier, taking note of who has which herp. Tell the students to keep the identity of their animal a secret.
- 3. Assign each person a herp pen pal. Try to match reptile with amphibian.
- 4. Give the students time to find out about their herps. Then pass out index cards and have the students write "postcards" to their pen pals from the point of view of their particular herps. Explain to the students that they shouldn't give away their herp's identity, but they should give clues that will help their pen pals figure out whether their herp is a reptile or an amphibian. The information should also be as accurate as possible. (If you're working with more advanced students, you can also have them try to figure out what kind of reptile or amphibian their pen pal represents. You may want to provide a list of the herps you've assigned, to help the students narrow down their choices.)
- 5. You may want to consider having the students write a series of postcards, with each one revealing a new clue about their identities. Here's an example of one postcard a frog or toad might write:

Dear Pen Pal: Life is busy these days!
I have been practicing very hard on my song. Spring will be here soon, and I have a lot of competition!

- 6. On the other side of the postcard, have the students draw a picture of their herps' habitats (excluding the herps themselves!).
- 7. Collect the postcards and hand them out to the appropriate pen pals. Provide resources and give the students time to figure out what kind of herp their pen pal is.
- 8. Have several of the students read their postcards out loud, then ask for opinions on the kind of herp that "wrote" each card. Finally, have the various pen pals identify themselves.
- 9. Use the postcards to create a bulletin board display. The focus of the display could be similarities and differences between reptiles and amphibians. (See the background information under Who's
M: Ask HN: learning Basic Math, Reading Math - justlearning

Lately, after a couple of posts including the latest http://news.ycombinator.com/item?id=666407 , I gathered the courage to ask this:

problem description: I want to finish "Introduction to Algorithms" (Cormen et al) and the math is holding me back. I have been trying and am too impatient. Previous comments (from "do you buy books for long term") and a couple more helped me understand that it is normal to spend a couple of hours on a single paragraph/page. But sometimes I don't get it for lack of basics (it's been a while since academics). I know I should work hard at going back, but however far back I go, it feels like I don't know it. The longer it takes, the more depressing it is. Sometimes I feel my whole life is going to be depressed in this pursuit.

My problem in one line: I want to READ math like I read English (when I read the word mountain, I visualize a mountain; when I read an equation, I see Greek letters with an equals sign... you get the idea?)

my self analysis: too impatient, want to code the pseudo-code eagerly, hate (or am too lazy?) to take pen/paper, look for solutions without trying hard (typing/reading the solution rather than understanding), have no big picture of implementing math, no project to work on (people reading math that helps in their work vs. me doing math with no implementation makes my brain go to sleep).

I have been trying to get into math for a while. I have done my research and read a few excellent blogs (including from at least one member of HN) for know-how. Books I know I should cover for the basics:

- Discrete Mathematics with Applications by Susanna S. Epp
- How to Prove It
- Concrete Mathematics - Knuth

(I got along well with Knuth for the first chapter, and after a while my lack of basics >> depression >> stop and do something else.)

Which books have helped you? Which one should I pick first? How can I achieve my goal?

I need books basic enough to teach me discrete math (to achieve my goal of finishing Introduction to Algorithms) and, in the long run, to read math like English.

I am looking for your experiences - obstacles you faced... something like this [ http://news.ycombinator.com/item?id=666615 ; I can't relate to a few points, e.g. "not qualified to decide if a book is badly written" ] - anything I can relate to, any anecdotes? Perhaps my subconscious will hold back my depression! Every one of you has truly enchanting stories. I know I am taking your time, but I HAVE to do this.

disclaimer: I am not a comp sci student. <cough>my experience has been as an enterprise developer</cough>. Please don't be hard on me.

...and thanks for your precious time

R: yequalsx

I agree with RiderOfGiraffes. You must do. Reading alone won't do the trick, but I'm sure you know this. I have found that there are two types of mathematicians:

1. Cannot read a theorem without knowing every little detail.
2. Can read a theorem and accept that it is true without knowing every little detail and being able to justify every little detail.

You sound like type 1. For a type 1 person I recommend taking courses and asking the teacher lots of questions. I am type 1 and it took me a long time to really understand how to read a mathematics book. I learned this in graduate school because my teachers mostly sucked at teaching. Good luck.

R: yequalsx

Let me add that one way to become a type 2 person is to play a game with a theorem. Say you encounter something you don't really understand. The book says it is true but you don't know why. Forget about why it is true and see if you can do problems assuming it is true. Say to yourself, "Provided this is true, what must logically follow from it?" See if you can use the result even though you don't know why it is true or where it came from.
R: justlearning

thanks, good to know (both about the types) and how to get over to type 2.

R: vomjom

Frankly, CLRS goes over a lot of the basic knowledge you need to get through the book. I don't think a discrete math textbook will help you if your main goal is to get through CLRS; the math in that book isn't that dense. Like all things, it just takes practice. Spend a lot of time on sections that are hard to understand, and future sections will become a lot faster and easier. Try the MIT algorithms video lectures: http://ocw.mit.edu/OcwWeb/Electrical-Engineering-and-Computer-Science/6-046JFall-2005/VideoLectures/index.htm

R: Arun2009

This is a topic I'm really interested in, and I do sympathize with your situation. I have no advice for you, but here's my experience. I hold a Masters in EE (control theory). I have never found studying mathematics formidably intimidating, probably because I've been at it for some time now. But I totally suck at the following things:

- Solving problems that require really inspired "crux" moves. Think IMO-level problems, lighter versions of Euler's solution to the Basel problem (http://en.wikipedia.org/wiki/Basel_problem), etc.

- Coming up with abstractions and new mathematics. Think (again) Euler's invention of graph theory or Cantor's invention of set theory.

- Recalling what I've learned just months after its immediate use is over. For example, I had to cover quite a bit of math for my BEng, but if you ask me now about real analysis, PDEs, stochastic processes or Fourier transforms, I'd be stumped.

I strongly suspect that this is because we haven't analyzed and understood the problem of learning mathematics well. To give you an example, 200 years ago people were trying to get fit, but it was not until the invention of systematic strength training with free weights, machines, etc., that we could really do a reasonably good job of it efficiently. Early bodybuilders look like sticks compared to the monsters of today. I am hoping that such a revolution happens in education as well. Before we learn, we must really learn how to learn. Our knowledge of that is very limited now.

R: michael_dorfman

First, regarding CLRS: have you looked at the videos of Leiserson & Demaine's MIT course on the subject? Watching video lectures might get through where the book doesn't.

I was going to recommend the Knuth book, so I'm trying to better understand your problem. Where, in the first chapter, did your "lack of basics" manifest? If you spell it out in a bit more detail, perhaps we can make more specific recommendations.

Needless to say, it helps to make sure your expectations are reasonable. "Reading maths like you read English" is probably never going to happen; every mathematician I know can knock through a novel in half the time it takes to read the equivalent number of pages of a mathematics book, and that's in the best case.

R: justlearning

"Where, in the first chapter, did your 'lack of basics' manifest?" - in the Tower of Hanoi, there are several scattered equations (though Knuth starts with basics) with mathematical "syntax" that makes me impatient. What leaves me 'hallucinated' is that even after I read Knuth's clear explanations, I can't relate them to the math equation just below the paragraph.

R: psyklic

To start off, I recommend that you get more practically oriented books which contain lots of exercises. There are actually some books which explore discrete math using C or Mathematica/Maple. Those might make the subject a bit more interesting and concrete.

I've found that (at least with computer books), the mindless visual quick-start texts are best for me to begin with. They make it more fun to explore the material even though the books have less "meat" for your buck. Even though I am advanced enough to sit down with a manual and understand it, these books are much more motivating for me.
R: RiderOfGiraffes

One person's opinion ...

You will never read math like English. I'm a Ph.D., and I've had this discussion with many other graduates and post-graduates. Math is different from English (or any other natural language). In my circles of math, programming and education there is even a phrase "Read like Math" as opposed to "Read like a comic book".

Math is its own language, and it is dense. When interpreted into English, a single equation can become several paragraphs. Learning to read and visualize an equation is possible, but always necessary. Simple equations are easy, but if there's real content, it's hard.

You need to ask what you're trying to accomplish, really. If you want to learn stuff, you need to do it. You can learn about it by reading about it, but you can only learn to do it by doing it (under guidance or with hints).

See also:
http://news.ycombinator.com/item?id=672067
http://www.scientificblogging.com/carl_wieman/why_not_try_scientific_approach_science_education

EDIT: added reference: http://c2.com/cgi/wiki?MathForProgrammers

Quoting from this last:

I'm not certain that reading like prose is a benefit. When algebra was first invented, people didn't use variables, operators or other notation. They just wrote things out in natural language. Here is an example from Al-Khwarizmi's famous book:

"If the instance be, 'ten and thing to be multiplied by thing less ten,' then this is the same as 'if it were said thing and ten by thing less ten.' You say, therefore, thing multiplied by thing is a square positive; and ten by thing is ten things positive; and minus ten by thing is ten things negative. You now remove the positive by the negative, then there only remains a square. Minus ten multiplied by ten is a hundred, to be subtracted from the square. This, therefore, altogether, is a square less a hundred dirhems."

Compare this to

(x+10)*(x-10) = x*x + 10*x - 10*x - 10*10 = x*x - 100

Now imagine trying to do even simple calculus like this. If there's a lot of essential complexity, I claim that natural language becomes much harder to understand than specialized notation, which is why notation exists in the first place. A good notation is understood on its own terms, not in terms of translation to some natural language statement; for instance, algebraic manipulations are done according to certain rules, without having to re-justify them each time, because of the underlying soundness of algebra.

R: justlearning

Thanks. How then could you make math as lucid as possible to understand? "Learning to read and visualize an equation is possible" - any tips on how? Sorry for the late reply - I wake up to the Chinese sun.

R: papaf

Another online course that starts with the basics: http://math.ucsd.edu/~ebender/DiscreteText1/
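One nice thing about the notation in that algebra example is that it is machine-checkable. A small sketch in Python (the helper names `lhs`/`rhs` are just for illustration) spot-checks the identity (x+10)*(x-10) = x*x - 100 at several points:

```python
# Check the identity (x + 10) * (x - 10) == x*x - 100 by evaluating
# both sides at a range of sample points.
def lhs(x):
    return (x + 10) * (x - 10)

def rhs(x):
    return x * x - 100

# Both sides are polynomials of degree 2, and two degree-2 polynomials
# that agree at 3+ distinct points are identical, so a handful of
# integer samples is a genuine verification here.
assert all(lhs(x) == rhs(x) for x in range(-5, 6))
print("identity holds")
```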
Web developers can save hours of time by reusing code and finding free snippets online. But this can be tough if you don't know where to look. So for this post I've organized the best sites to browse for totally free CSS3 code snippets. You can find everything from simple buttons to whole webpage layouts and pretty much everything in between.

1. Web Code Tools

As a resource for CSS snippets you have to check out Web Code Tools. This site offers custom CSS3 code generators to save you time building gradients, filters, and pure CSS animations. But it's also a massive resource for all frontend dev languages. You can find lots of generators and code snippets for HTML5 elements, microdata, and OG (Open Graph) snippets for your page header. If you're looking for a quality resource for CSS and frontend snippets, Web Code Tools is a must-bookmark site.

2. CodePen

The best online repository to browse through for code snippets has to be CodePen. It's a free online IDE that also works as a library showcasing cool dev projects made by developers worldwide. What I like most about this site is the quality of content. Yes, there are many simpler pens with less useful snippets. But you can also browse popular pens to see which designs are trending and gaining traction. Whether you're looking for pure CSS3 or a CSS/JS mix, I guarantee CodePen has snippets for everything you could ever need (and a lot more!).

3. CSS Flow

CSS Flow curates UI kits and design resources, and it has a snippets area with free hand-crafted code. These snippets are mostly geared towards UI elements and are entirely coded in HTML and CSS/Sass. You'll find stuff like toggle switches, signup forms, CTA buttons and even a todo list. Each snippet has a live demo you can view in your browser before downloading. Most snippets date back quite a few years and the site hasn't been updated recently. However, these snippets are still fully compliant with HTML5/CSS3 specs and they look incredible.

4. Code My UI

The code-based inspiration gallery Code My UI is the perfect curated resource for CSS snippets. Every post is hand-picked and organized by the most recent snippets found all over the web. You'll find typography designs, custom layouts, button styles, and pretty much everything you'll need for a sweet website. At the top of the page you can sort by category or by search term. This way you can whittle down the results to find exactly what you're looking for.

5. Codepad

Not many sites can compete with Chris Coyier's CodePen, but if there's any other site I could recommend, it's Codepad. Everything on the Codepad front page is voted on by users. You can set up a new playground and submit your ideas too, all with an online IDE for HTML/CSS/JS code. The free CSS snippets vary wildly, from useful snippets (buttons, layouts) to more diverse snippets mostly created to show off. This pure CSS WinXP loader is a nice example. If you're looking for another site with user-generated code, I highly recommend Codepad.

6. Bootsnipp

Nobody could've guessed how quickly Twitter's Bootstrap would grow in just a few years. It's easily the #1 frontend framework, and with sites like Bootsnipp you can save time by using pre-built code snippets. Most Bootstrap code is repetitive, so templates are very popular. With Bootsnipp you can browse hundreds of custom projects built on Bootstrap. You can browse by tags or by the specific Bootstrap version, ranging from v2.3 up to the newest v4.0. If you want to create and share your own code, you can sign up for free and publish your Bootstrap snippets for the whole world. This site offers the BS dev community huge value and it grows larger every day.

7. Codeply

Some devs prefer a wide choice of frameworks outside Bootstrap. That's where a site like Codeply comes in handy. This free resource lets you tinker with dozens of frontend frameworks like Foundation, Pure, Materialize and others. You can build custom layouts right in your browser and save them as free snippets for the world to clone. You can browse all snippets by framework or by tags, all of which makes navigating the site a lot easier. It doesn't have the simplest layout, but you should find a lot of cool stuff in here.

8. Little Snippets

Little Snippets gathers the best code published to sites like CodePen into one place. If you don't feel like wading through hundreds of pens, you can just use Little Snippets instead. Every snippet on the site is built with HTML/CSS, so this is great for front-end developers. It also has a category page to browse by interface elements like buttons, icons, or nav menus. This site isn't as populated as others, but it's still a fantastic resource to cut through the dirt and find the diamonds.

9. Enjoy CSS

If you're more interested in CSS code generators, then take a look at Enjoy CSS. It has a super clean interface and plenty of free CSS code generators to make any developer happy. You can build custom gradients, box shadows, transitions and transforms, all with a GUI interface. Plus the site has smaller galleries of code snippets for reusable elements like buttons. Enjoy CSS is like a one-stop shop for all your CSS needs. This is the perfect resource for frontend devs who want non-JS solutions. Every code snippet includes the source code, and you can make edits right in your browser. It does not have the same volume as sites like CodePen, but it's still an excellent resource to check for free CSS snippets.

All the sites in this post are fantastic, and each offers a slightly different style of custom CSS snippets. All of these resources will be around for a while, and you can bet thousands of new snippets will be added in the years to come.
iOS Core Data: Confused about Core Data and database

I am really confused about what Core Data actually is. Or I guess my question is, when dealing with a database, would you use Core Data? Like, if I wanted to access values from a database, would I use Core Data to access those values? How would I approach this problem? Thank you so much for the help.

Core Data is a framework that does the work of "object persistence". In other words, it's code you can use that takes care of saving a collection of objects to disk and loading them again later. It does a lot of work to allow you to store lots of data, load only the objects you need at a time, and unload them when memory is tight. Core Data can use a database to accomplish this, but that's its business, not yours. When you use Core Data, it is a black box. You tell it to save the data and then step out of the way.

If you want to interact with an existing database, such as a MySQL database on a web server, that's a completely different matter. You might use Core Data to store local copies of objects on your device, but in that case Core Data won't care that the objects are copies from some other database.

Okay, thanks. So my question now is, how would I interact with a database? What route would I go, and what kind of database would work best? Or if I wanted to store values on a database server over the network, how would I approach this? Thank you so much!

You seem to be asking about a web service back end: you would communicate with a "database server" using JSON over HTTP or similar. Maybe you should look at parse.com or another backend service to get started with whatever you want to do.

Yes, if you want a local database on your device, Core Data is the appropriate technology. It probably makes sense to start your research with the Core Data Programming Guide. You can alternatively use SQLite (which Core Data uses in the backend), but Core Data offers some material advantages and is the preferred database interface for iOS. If you decide to pursue SQLite for any reason, though, I'd suggest you contemplate using the FMDB Objective-C SQLite wrapper. But Core Data is generally the way to go.

+1 on the Core Data Programming Guide. I would say that it is compulsory reading if you intend to do anything serious with Core Data.

But what do you do if you need to take advantage of a remote database? Apple does a poor job of explaining how Core Data fits in, or doesn't, in that case. Thanks in advance.

@AlexZavatone They don't really discuss it because there really is no integration between an iOS app and a server's database. You generally have a web service running on your web server that communicates with the database on the server. The iOS app's integration with the server is limited to making requests of the web service, and is completely isolated from the server's database.

Yeah, advanced techniques in this area seem to be a rather open area. I'm surprised that techniques on this front are not more clearly explained. I'm already doing that, but there should be an overarching "Core Data" that also includes access to external databases and can handle the different needs of device-to-database communication. I think this is a glaring omission on Apple's part. Thanks for the info.
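The "object persistence" idea the answers describe (save objects to a local store, load them back later, with the database as a hidden detail) can be sketched as an analogy. This is not Core Data's API; it's a toy illustration using Python's built-in sqlite3 module, with an invented table and values:

```python
import sqlite3

# Toy "object persistence": save locally, reload later. Core Data adds
# modeling, faulting, undo, and memory management on top of its own
# private store; this only illustrates the save/load round trip.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO note (body) VALUES (?)", ("hello",))
conn.commit()

# Later: the persisted value comes back without the caller caring how
# it was stored on disk.
rows = conn.execute("SELECT body FROM note").fetchall()
print(rows[0][0])
```

The point of the analogy is the black-box contract: the caller hands over data and gets it back; the storage format underneath is the framework's business.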
from random import choice, uniform
from math import pi, cos, sin

from data.mobs import *
from .mob import Mob
from .bubble import Bubble
from .utils import *

#_________________________________________________________________________________________________
class Infusoria(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *INFUSORIA_PARAMS.values())

    def handle_injure(self, damage):
        super().handle_injure(damage)
        if damage and self.health == 1:
            bubble = Bubble(self.x, self.y)
            bubble.gravity_radius = 1.3 * self.player.bg_radius
            bubble.vel = 0
            self.game.room.bubbles.append(bubble)

#_________________________________________________________________________________________________
class Ameba(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *AMEBA_PARAMS.values())

#_________________________________________________________________________________________________
class Cell(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *CELL_PARAMS.values())

#_________________________________________________________________________________________________
class Baby(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BABY_PARAMS.values())

#_________________________________________________________________________________________________
class Turtle(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *TURTLE_PARAMS.values(), random_shift=320)

#_________________________________________________________________________________________________
class TurtleDamaging(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *TURTLE_DAMAGING_PARAMS.values(), random_shift=320)

#_________________________________________________________________________________________________
class Bug(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BUG_PARAMS.values())

#_________________________________________________________________________________________________
class Scarab(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *SCARAB_PARAMS.values())

#_________________________________________________________________________________________________
class Gull(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *GULL_PARAMS.values())

#_________________________________________________________________________________________________
class Ant(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *ANT_PARAMS.values(), random_shift=400)

#_________________________________________________________________________________________________
class Spreader(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *SPREADER_PARAMS.values())

#_________________________________________________________________________________________________
class BigEgg(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BIGEGG_PARAMS.values())

#_________________________________________________________________________________________________
class BossHead(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BOSS_HEAD_PARAMS.values())
        self.y_offset = H(213)
        self.body_rect.w = H(620)
        self.body.angle = -0.5 * pi

    def update_pos(self, dt):
        self.body_rect.center = self.x, self.y + self.y_offset

    def update_body_angle(self, dt):
        angle = calculate_angle(self.x, self.y, self.player.x, self.player.y)
        if angle > 0.5 * pi:
            angle -= 2 * pi
        min_angle = -0.7 * pi
        max_angle = -0.3 * pi
        if min_angle <= angle <= max_angle:
            min_angle = max_angle = angle
        if self.body.angle < angle:
            self.body.angle = min(self.body.angle + 0.0003*pi * dt, max_angle)
        else:
            self.body.angle = max(self.body.angle - 0.0003*pi * dt, min_angle)

    def collide_bullet(self, bul_x, bul_y, bul_r):
        return circle_collidepoint(self.x, self.y + self.y_offset,
                                   self.radius + bul_r, bul_x, bul_y)

    def update_body(self, dt):
        turret_target = self.gun.turret_target
        player_pos = self.player.x, self.player.y
        body_angle = self.body.angle
        if self.body_rect.colliderect(self.screen_rect):
            for i, circle in enumerate(self.body.circles):
                if circle.is_visible:
                    target = turret_target if 16 <= i < 24 else player_pos
                    circle.update(self.x, self.y, dt, *target, 0, body_angle)
        self.body.update_frozen_state(dt)

#_________________________________________________________________________________________________
class BossHandLeft(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BOSS_HAND_LEFT_PARAMS.values())

#_________________________________________________________________________________________________
class BossHandRight(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BOSS_HAND_RIGHT_PARAMS.values())

#_________________________________________________________________________________________________
class BossLeg(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BOSS_LEG_PARAMS.values())

#_________________________________________________________________________________________________
class Terrorist(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *TERRORIST_PARAMS.values())

#_________________________________________________________________________________________________
class BenLaden(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BENLADEN_PARAMS.values())

#_________________________________________________________________________________________________
class BomberShooter(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BOMBERSHOOTER_PARAMS.values())

#_________________________________________________________________________________________________
class Mother(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *MOTHER_PARAMS.values())
        self.generation_time = 6000
        self.generation_cooldown = 7000
        self.child_params = choice([GULL_PARAMS, BUG_PARAMS, SCARAB_PARAMS])

    def update(self, dt):
        super().update(dt)
        if not self.is_paralyzed:
            self.generation_time += dt
            if self.generation_time >= self.generation_cooldown:
                self.generation_time = 0
                self.generate_mob()

    def generate_mob(self):
        mob = Mob(self.game, self.screen_rect, *self.child_params.values())
        mob.xo = self.xo
        mob.yo = self.yo
        mob.x = self.x
        mob.y = self.y
        mob.polar_angle = self.polar_angle
        mob.body.update(mob.x, mob.y, 0)
        self.game.room.mobs.append(mob)

#_________________________________________________________________________________________________
class Cockroach(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *COCKROACH_PARAMS.values(), random_shift=400)

#_________________________________________________________________________________________________
class Beetle(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *BEETLE_PARAMS.values())

#_________________________________________________________________________________________________
class Spider(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *SPIDER_PARAMS.values())

#_________________________________________________________________________________________________
class Turret(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *TURRET_PARAMS.values())
        random_offset = uniform(0, 800)
        random_angle = uniform(0, 2*pi)
        dx = random_offset * cos(random_angle)
        dy = random_offset * sin(random_angle)
        self.move(dx, dy)
        self.gun.set_turret_target()
        self.body.update(self.x, self.y, 0, *self.gun.turret_target)

    def update_body(self, dt):
        if self.body_rect.colliderect(self.screen_rect):
            self.body.update(self.x, self.y, dt, *self.gun.turret_target)
        self.body.update_frozen_state(dt)

#_________________________________________________________________________________________________
class MachineGunner(Mob):
    def __init__(self, game, screen_rect):
        super().__init__(game, screen_rect, *MACHINEGUNNER_PARAMS.values())

#_________________________________________________________________________________________________
mobs = {
    "BossHead": BossHead,
    "BossHandLeft": BossHandLeft,
    "BossHandRight": BossHandRight,
    "BossLeg": BossLeg,
    "Gull": Gull,
    "Bug": Bug,
    "Scarab": Scarab,
    "Ant": Ant,
    "Cockroach": Cockroach,
    "Mother": Mother,
    "Beetle": Beetle,
    "Spider": Spider,
    "Spreader": Spreader,
    "Infusoria": Infusoria,
    "Baby": Baby,
    "Cell": Cell,
    "Ameba": Ameba,
    "Turtle": Turtle,
    "TurtleDamaging": TurtleDamaging,
    "MachineGunner": MachineGunner,
    "Turret": Turret,
    "Terrorist": Terrorist,
    "BenLaden": BenLaden,
    "BomberShooter": BomberShooter,
    "BigEgg": BigEgg,
}

def get_mob(name, game, screen_rect):
    return mobs[name](game, screen_rect)

__all__ = ["get_mob"]
Scheduling subsystems (crontab, at) and the desktop
rodrigo at gnome-db.org
Fri Jul 23 14:00:30 EEST 2004

On Fri, 2004-07-23 at 11:33 +0200, Maciej Katafiasz wrote:
> In a message of Thu, 22-07-2004 at 14:55, Maciej Katafiasz wrote:
> > Just to let you know, we're working on a modern replacement for cron, but
> > with much wider scope than just scheduling, and fully DBUS based.
> > I'm going to prepare a much more detailed writeup in the next few hours or so,
> > so stay tuned :).
> As promised, here is a tad longer description of the Eventuality framework.
> If I'm not clear enough, please ask; I'll gladly provide you with
> answers, but I hope that together with what was already said in this
> thread, you'll get a good overview of the idea.
> Eventuality is a framework designed for, amongst other things, replacing
> legacy cron jobs as a means of performing prescheduled actions. This is
> however only a small part of the intended functionality.
> Basically, the framework is functionally split up into three parts:
> - one that lets "producer" apps register actions they support. For
> example, a movie player can have a "PlayMovie()" action, with parameters
> like "URL" (string), "Fullscreen" (boolean) and "Volume" (float).
> Interested apps can later query for supported apps, and use them in
> their own events.

this is great! We miss, though, if you are going to use D-BUS, a good query mechanism, so that we can query all the installed actions without having to start the individual applications.

> - although desktop oriented, Eventuality is meant to be a sysadmin's tool
> as well. That's why we'll struggle hard not to require any
> GUI-specific hacks, nor other things that would make it unwieldy for

as you said in the document you posted before, the UI can be dynamically generated from the description of the methods returned by each individual application. That should work pretty well for simple types, but not so well for other types.

So I would suggest there be additional information on each argument, like the allowed range, the set of valid values, or, if it's a file, what kind of file it is (if it's an image, for instance, show a preview, etc.) ... That would solve the UI problem, since building the UI automatically for any desktop/toolkit would be quite easy.

all this sounds really good, and something I have been wanting for a long time (the scriptability stuff especially), so, when are you releasing it? :) No, seriously, is there any code written yet? Where is it?
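The suggestion above (per-argument metadata rich enough to auto-generate a UI for any toolkit) can be sketched in a few lines. This is a hypothetical sketch in plain Python, not Eventuality's or D-BUS's actual API; the registry structure and widget names are invented for illustration:

```python
# Hypothetical registry of producer-app actions, each parameter carrying
# a type plus extra metadata (range, file kind) as suggested above.
ACTIONS = {}

def register_action(name, params):
    ACTIONS[name] = params

register_action("PlayMovie", {
    "URL":        {"type": "string", "kind": "file"},
    "Fullscreen": {"type": "boolean"},
    "Volume":     {"type": "float", "range": (0.0, 1.0)},
})

def ui_widgets(action):
    """Pick one widget per parameter from its declared type metadata,
    which is all a generic consumer needs to build a simple dialog."""
    widget_for = {"string": "entry", "boolean": "checkbox", "float": "slider"}
    return {p: widget_for[meta["type"]] for p, meta in ACTIONS[action].items()}

print(ui_widgets("PlayMovie"))
```

With the extra metadata present, a consumer could go further: clamp the slider to the declared range, or show a file preview when `kind` is "file", without any GUI-specific code in the producer.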
My name is Matthew and I’ve always like computers and tinkering with them. At 16 I went to college and started a computer programming A-Level. Even then using the Pascal language I was creating games. Generally remakes of old classics. In 2008 I learned about xbox360 indie games and started learning XNA. I released my first game called “The gravity effect” the following year and established Riddlersoft Games. Since then I have made around 10 games with varying success. All these games where made in my spare time more as a hobby than anything else. Typically I work at my other job which is a cleaning job, then in the evenings I program. I have made many different types of games, but because I work on my own I stick to 2D games. My most popular games to date have been the “Old School Racer” games. A 2D side scrolling motorbike racing game. Generally for me when I have an idea for a game, I quickly make a prototype to see how fun the game is. Then its the process of fleshing the idea out a bit. I keep playing what I’ve made, showing it to friends and then implementing feedback and new stuff. The game changes a lot from the initial prototype to the finished project. I’ve been working on Apex for the last few months. I originally released it about 7 years ago and thought I would just port it to switch. But as I played it I realized I need to change a lot. Then the person who does art for me suggested a few things and it snowballed into a complete new game. So I’ve been gradually adding new mechanics and features. The art has had a complete overhaul, most if not all has been replaced, as well as a lot of new art. Then we started thinking about the game and a story just developed over time. I’ve never had much success getting my game out there. Because I’m a very small indie developer most websites and influencers just don’t respond to my emails, or messages. So about a year ago I started listening to podcasts weekly while at work. 
This enabled me to get to know a lot about the people who run them. Because of this I’ve been able to get involved in the community and get to know people who have contacts in this area. Apex will be a good test to see how well I can do getting the word out about my game. I’ve always had an interest in computers, and as soon as I learned programming I fell in love with it. Since then I’ve found it relatively easy to make games, as I find it fun and rewarding. My main goal is to make something that people will enjoy, so I always like it when a game gets a good response. I have found that before you even start making that amazing game you have planned, it’s best to start with a smaller project that you can learn from. Make a game from start to finish. You will learn so much about what to do and what not to do. As well as that, your experience will grow, allowing you to make even better games. If you compare my first game to Apex, there is no comparison. I can’t stress enough how important practicing your craft is. Also, always listen to what people say about your game, and get it out to as many people as you can before you release it. My biggest regret is not listening to people as much as I should have. It’s so easy to see your own project only positively and not see the negative. I also found that near the end all I wanted to do was release it, and this sometimes made me release something that, if I had spent a few more weeks on it, would have been substantially better. My monthly revenue is very small, as I have only recently started making games for Switch. So far I have only released Old School Racer 2 on the Switch; even so, my revenue for the Switch is far better than for any other platform I’ve released games on. My main plans for now are to finish Apex. Once it’s complete I’m going to start learning the Unity engine so that my games will look and play even better.
TCP/IP Sockets in C#: Practical Guide for Programmers (The Practical Guides), PDF.

"It is a unique combination of well-written, concise text and a rich, carefully selected set of working examples. For the beginner of network programming it's a good starting book; on the other hand, professionals can also take advantage of its excellent, handy sample code snippets and material on topics like message parsing and asynchronous programming." (.NET Frameworks Team, Microsoft Corporation)

Sample excerpts from the book:

The Internet has become a part of everyday life, and access to its services is readily available to most students and their programs. The terms client and server are descriptive of the typical situation in which the server makes a particular capability (for example, a database service) available to any client that is able to communicate with it. This book is based on Version 1.

The typical TCP client goes through three steps. In the client walkthrough: create the socket and get the output stream (lines 16-17); send the request to the server (lines 28-34); encode and send the returned response message. TCP does not preserve read and write message boundaries: a stream is simply an ordered sequence of bytes, and if the receiving program does not call read, a large send may not complete successfully. The resulting state of the queues is shown in Figure 6.

For UDP, communicate by sending and receiving instances of DatagramPacket using the send and receive methods of DatagramSocket. Send the datagram (lines 32-47); since datagrams may be lost, our protocol implementation must be prepared to retransmit the datagram. The getData method returns the byte array associated with the datagram. At this time the only subtypes of InetAddress are those listed, but conceivably there might be others someday. The decision to use broadcast or multicast depends on several factors, including the network location of receivers and the knowledge of the communicating parties.

Instances of ExecutorService can be obtained by calling various static factory methods of the convenience class Executors. As the number of threads increases, more and more system resources are consumed by thread overhead; of course, the size of the thread pool needs to be tuned to the load. This is discussed further in Section 2.

For non-blocking channels, if the connection can be made without blocking, connect returns true; otherwise, the other methods allow each operation to be tested individually. NIO selectors do all of this. Selector creation and closing: static Selector open(), boolean isOpen(), void close(). You create a selector by calling the open() factory method. The accept method returns a SocketChannel for the incoming connection.

Online Companion Materials.
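The TCP client steps described above can be sketched in runnable form. Python is used here for a compact, self-contained illustration (the book's own examples are in C#); the loopback echo server, the OS-chosen port, and the message are assumptions made for this demo, not the book's listing.

```python
import socket
import threading

# Throwaway echo server on the loopback interface so the client below has
# something to talk to (a stand-in for the book's server, not its listing).
def echo_once(srv):
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))   # echo the request back unchanged
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

# Step 1: create the socket, connected to the server.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(srv.getsockname())

# Step 2: send the request and read the response. TCP is a byte stream, so
# read/write boundaries are not preserved; a real protocol needs framing.
client.sendall(b"hello")
reply = client.recv(1024)

# Step 3: close the socket.
client.close()
srv.close()
print(reply)
```

On the loopback interface a single recv is enough for a five-byte message; over a real network the client would have to loop until its framing rule says the response is complete.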
In keeping with the general theme of this web log, I would like to illustrate the principle that Frederic Bastiat so keenly illustrated in his parable of the broken window. A good economist will take into account the effects of an action or policy on all groups, across all time scales. With that in mind, let's take a look at a new policy proposed by Rep. Alan Grayson (D) from Florida. The Paid Vacation Act will: "...require companies with more than 100 employees to offer a week of paid vacation for both full-time and part-time employees after they’ve put in a year on the job. Three years after the effective date of the law, those same companies would be required to provide two weeks of paid vacation, and companies with 50 or more employees would have to provide one week." Sweet! Who doesn't like vacation, right? The idea here is that more vacation days increase worker productivity and happiness, and result in people using fewer unnecessary sick days. These developments are claimed to stimulate the economy. They certainly sound good. What worker wouldn't want more paid vacation time? These workers would most probably be happier and might be more productive as a result. Here is what we'll see if the bill is passed: more people on vacation and a boom in the travel industry. Will overall productivity increase? Let's look at what is unseen: - Requiring businesses to provide paid vacation raises costs for businesses, because workers are being paid not to be productive. If the cost of doing business increases, it becomes more difficult for employers to hire new workers. Some workers might even have to be fired for the business to continue running at a profit. We don't see the workers let go or barred from entry. - Employees are currently producing during what would become paid vacation. The money they earn from their production may not go to the travel industry, but it would go somewhere, and the community will be richer for it.
For example, the workers in a TV manufacturing plant will have produced 100 televisions and have money in their pockets to spend on home improvements. Now the community has more TVs and job growth in the home-building sector. We wouldn't see either of them. Just for kicks, let's examine unintended consequences as well. Mandatory paid vacation time effectively subsidizes an unproductive activity, and doesn't remove the incentive for people to use sick days. This is what happens when subsidies are introduced into the marketplace: demand goes up because the good is perceived as "free". Once workers get a taste of paid vacation time, they are likely to demand more of it. Thus a more likely scenario is that people will begin to use both paid vacation and sick days, and in time demand an increase in the number of paid vacation days. The article cited unwittingly points out the truth of this: "France currently requires employers to provide 30 days of paid leave." The Paid Vacation Act will restrict employment capacity, lower productivity, and restrict the community's ability to accumulate wealth. At least we'll see more Americans taking vacations during these tough times.
Driven by the need to test out a DVB-T capture card for future use in our HTPC (what’s the point in having a fancy HDTV without access to HD broadcasts, right?), and the desire to replace the 14″ CRT TV in my room with something at least slightly more modern, I realised that with all the spare hardware I have, I might as well order up a card and get to it. Based on my associate Troy’s recommendation, I ordered the Hauppauge HVR-4000 Quad Mode tuner card. I got mine from Playtech, where they retail for a little over $200 inc GST. The spec page listed above tells the whole story, but basically it’s a PCI card with analog TV, DVB-S (satellite), DVB-T and FM tuners. It comes bundled with Cyberlink PowerCinema software, an FM antenna, an IR receiver and a Microsoft Media Center clone remote. Batteries ARE in fact included. First up: it’s handy to be aware of some of the local resources available; these will prove invaluable if you run into problems. Geekzone has a fairly busy forum based around DVB-S and DVB-T matters in NZ, with a pretty good signal-to-noise ratio for the most part. Fossie, one of the big posters on the Geekzone forums, has put all his info on his blog, and is keeping it updated. This page is very valuable. Cranz has a very detailed site on how to set up MediaPortal. I plodded through it myself, but I link it for the reference of others. It’s fairly straightforward if you’re technically minded and have at least some idea of installing and configuring new hardware. Current test hardware is an entry-level box with: *ASUS M2N-MX SE motherboard (nForce 430 chipset with GeForce 6100 VGA onboard) *AMD Sempron 3200+ *PC2-5300 RAM @ 333MHz *80GB Seagate Barracuda 7200.10 SATA HDD *XFX GeForce 7600GS 256MB dual DVI + TV PCI-E card *Philips 190C 19″ LCD @ 1280×1024 Results so far: *Trying to use onboard video is fruitless. Even cranking it up to 256MB, pointless. You can just get 720p streams to play, but they are jittery as hell. The 1080i of TV3? No way.
Usually results in a lockup. *With the 7600GS: joy, it works, sometimes with a very, very slight jitter, but it varies from channel to channel. The long and the short of the above two points: you need a card with good H.264 decoding skills. *PowerCinema won't give me the AAC LATM sound for some reason. Also, it’s a piece of shit, but at least it seems snappier than the other two. *GB-PVR is a piece of shit. Complex to configure, barely documented, works in a generally stupid way. *MediaPortal is better, but still a bit tricky to set up; at least it’s got better documentation and a wider community. *EPG (Electronic Programme Guide). This is the KILLER app of any PVR. Not knowing when shows are on SUCKS for recording purposes, and most PVR software makes it hard for you to record anything unless it’s in your guide. The advantage of digital TV is that the guide is built in along with the signal, unlike analog TV, which is a stone cold bitch to get a solid, stable guide source for. In fact, it’s unsupported by any official channels here in NZ, and many other places. Thanks Microsoft (and HP/Dell etc) for selling us products we can’t make full use of here! Anyway, I digress… with digital TV, you tell your capture card to grab the EPG over-the-air, and all is well in the world! 😀 Apart from the fact that the quality and depth of detail varies greatly, but I can only hope this improves with time. *The UI of all 3 bits of software, at default skin settings, is ugly as fucking sin. Cheap rip-offs of WinMCE, which at least has some kind of slickness to it. MediaPortal is your best bet here, with a far wider skin range. *If you’re installing on Windows XP, you need THIS hotfix. In WinMCE it’s included in a rollup, but not in XP SP2. Note: I used a particular version because doing it the standard way will result in the updated file being replaced by the old version, thanks to Windows trying to be secure. 🙂 Once I find the page that told me how to do that again, I’ll link it later.
*Codecs: you need to be able to support H.264 video and AAC LATM audio. Next steps: trying the same DVB-T card and 7600GS in more powerful hardware, recording, and trying to wrangle Windows Media Center (XP and Vista versions) and some kind of Linux PVR setup into playing the game. Also remotes, but this should be trivial. I have both the remote that came with the Hauppauge (MS clone) and a genuine Microsoft MCE remote here. That, and I’ve already tried the MS one with MediaPortal, as it seems to have the best support templates.
Cannot Figure Out Transform Constraints I'm tearing my hair out trying to do something that I believe should be dead simple, and I can't figure out what I am missing. Assume a three-bone chain in sequence: A, B, C. They are not connected, but are in a parent chain and floating. I want to control this chain like an IK limb. The restriction is that I cannot modify the original armature at all. No reparenting, no new bones, etc. So, I create another rig with an IK limb set up with corresponding controller bones. Here is an example. The green bones are the original armature. The blue ones are the controller armature. So the task is simple: the green bones should follow the transformed positions and rotations of their corresponding blue bones. No, Child-Of constraints are not possible here, because the green bones are parented to each other, as I said. So when a child green bone is moved to follow a blue bone, and then its own parent is also moved, this causes it to move out of synchronization. So we can try a Copy Location and a Copy Rotation in offset mode, so that the green bone moves together with the blue bone, and when the blue bone rotates, the green bone experiences the same transformation. But this does not work. When setting up the Copy Rotation constraint, even though the blue bone hasn't moved at all, the corresponding green bone has mutated its rotation away from pointing forward, as shown. It appears that Blender is bugged, because performing this same constraint setup (Copy Loc + Copy Rot) in offset mode on 2 cubes as object constraints creates the desired behavior perfectly. Share your file. https://blend-exchange.com/ What are you talking about? Could you edit the question and explain what it actually is that you are trying to do and why? Please provide screenshots and pictures, and a .blend file if possible. Provide context as well: what is this for? Why do you need it? Why are you doing it this way (whatever way it is)?
At the moment it's not clear what your problem is, because you have not described what you are trying to do, and this question is not likely to get any meaningful answers. I'm flabbergasted. I wrote entire paragraphs describing what I'm doing and the problems encountered doing it. If you want a blend file or screenshots, that's fine, but please do not insinuate I did not explain what I am doing, because I did precisely that. As R-800 says, please share your file. You seem to have set up a lot of very strange rules and given yourself a nearly impossible situation. Why do the green bones have to be parented to each other if they aren't going to use each other's transforms at all? Why can't you modify the blue armature?
Starts only with debug v4 During normal startup nothing happens and the web interface is not available. When using debug, everything works, including the web interface. Hi @Johnnykson, Could you post the latest error_<date>.log under the ./log folder please? There should be some issue with Gunicorn, and I'm having issues even with a fresh OS :( Just wanna make sure you're just testing it, not using it in production, right 🤣 ? OS: Ubuntu 20-22.04 LTS Python Version: Python 3.10.12 There were errors during the first installation. I think I fixed them; now ./wgd.sh install is successful. There is no error_<date>.log in the ./log folder. install.txt And yeah, I just want to see version 4, I'm curious 🤣🤣 Hi! Do you still remember what the error was that you encountered in the first place? And after the successful install, does sudo ./wgd.sh start work for you? Are the port and IP the same as in debug mode? I looked at the history; there was no error initially, I restarted with another version. I tried to install on a clean machine (Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-117-generic x86_64)). The error is repeated. It only starts in debug. I was able to re-create the issue lol. I'm using a fresh 22.04.4 with Python 3.10 and it won't start after install for the very first time: But then, when I manually run the gunicorn command, it just works.. If you're wondering, it's the command baked into ./wgd.sh at line 240, but without all the logs THEN! I removed the folder, re-did everything, and it just works now LMAO.. idk what is causing this.. need more investigation Hi @Johnnykson, I just pushed some new changes to the branch v4, not v4.0.beta1, and I think I fixed the issue. I think it was due to the bash script exiting before Gunicorn could actually spawn the workers. I've adjusted the command and added checks to see if Gunicorn has spawned, then end the script.
Feel free to test it on your side and let me know :) Hi @Johnnykson, I just pushed some new changes to the branch v4, not v4.0.beta1, and I think I fixed the issue. I think it was due to the bash script exiting before Gunicorn could actually spawn the workers. I've adjusted the command and added checks to see if Gunicorn has spawned, then end the script. Feel free to test it on your side and let me know :) It hangs without continuing, but if I run gunicorn manually the web panel starts working and I see two processes; the terminal still displays only Background Thread #1 Started That's weird... I tried multiple times with a fresh 22.04.4 and it works after the update.. can you try to git clone the branch and see if it works? That's weird... I tried multiple times with a fresh 22.04.4 and it works after the update.. can you try to git clone the branch and see if it works? Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-117-generic x86_64) What I did: reinstalled the OS, apt update and apt upgrade, installed WireGuard and WireGuard-Tools, cloned the branch with the already-fixed wgd.sh, and installed WGDashboard Hello, I'm back with some good news. Seems like it was due to the actual command, and bash not parsing it right when running the script.. It shouldn't matter, but I've moved all the Gunicorn configuration into gunicorn.conf.py. If you have time, would you be able to try it on your side plz? U can use this command to save ur time ;) sudo apt-get update && apt install wireguard-tools && git clone -b v4 https://github.com/donaldzou/WGDashboard.git && cd ./WGDashboard/src && chmod +x ./wgd.sh && ./wgd.sh install A completely successful start should look like this. I did the whole testing process in the "demo mode" of Ubuntu's ISO. Everything worked, thanks! Nah, should be me thanking you :)
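The fix discussed in this thread (keeping wgd.sh alive until Gunicorn has actually spawned) can be sketched roughly like this. This is a hypothetical sketch, not WGDashboard's actual code: the function name, process pattern, and timeout are all assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of the launcher fix: block until a process matching a
# pattern exists, so the script doesn't exit before Gunicorn spawns workers.
wait_for_process() {
    pattern="$1"
    timeout="${2:-10}"    # seconds to wait before giving up (assumed default)
    i=0
    while [ "$i" -lt "$timeout" ]; do
        if pgrep -f "$pattern" >/dev/null 2>&1; then
            return 0      # process found: safe to let the script end
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1              # timed out: the process never appeared
}

# Usage sketch: start Gunicorn in the background, then wait for it.
# gunicorn -c gunicorn.conf.py dashboard:app &
# wait_for_process "gunicorn" 15 || echo "Gunicorn failed to start" >&2
```

A real launcher could additionally check the worker count or probe the web port before reporting success, which is closer to what the added checks in the thread seem to do.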