The SAS Jedi Mark Jordan presents his SAS Global Forum 2018 paper, "Working with Big Data in SAS."
Read more »
Our colleagues at the SAS office in Korea recently had the opportunity to interview two customers from KT, one of the biggest telecommunications companies in Korea, about getting SAS certified. Sung-chul Hwang and Gyu-seob Lee both have four SAS certifications – Base Programmer, Advanced Programmer, Statistical Business Analyst and Predictive …
Read more »
I am a data source, and so are you. It begins in the morning when we grab a cup of coffee at our local coffee shop and tap the screen of our tablet to browse the day's headlines. Just in this simple routine, data can be used by companies to …
Read more »
We previously looked at SAS Grid Manager for Hadoop, which brings workload management, accelerated processing, and scheduling to a Hadoop environment. This was introduced with the third maintenance release (M3) of SAS 9.4. M3 also introduced support for using...
Read more »
I've recently written about how much new functionality SAS is releasing on an almost monthly basis without much fanfare, and about how Hadoop is becoming a new "operating system" on which we should expect to see Grid and LASR run...
Read more »
I'm gearing up to teach the next "DS2 Programming Essentials with Hadoop" class, and thinking about Warp Speed DATA Steps with DS2, where I first demonstrated parallel processing using threads in Base SAS. But how about DATA step processing at maximum warp? For that, we'll need a massively parallel processing …
Read more »
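For context, the technique from that earlier post is a DS2 thread program: the per-row computation is packaged in a THREAD block, and a data program launches several instances of it in parallel, each reading a slice of the input rows. Here is a minimal sketch of the pattern, assuming a hypothetical input table work.big_table with a numeric column x standing in for the real computation:

proc ds2;
   /* Thread program: each running instance reads its own slice of the input */
   thread compute_t / overwrite=yes;
      dcl double result;
      method run();
         set work.big_table;     /* hypothetical input table */
         result = x * 2;         /* placeholder for the complex per-row work */
      end;
   endthread;

   /* Data program: launch four parallel instances of the thread program */
   data work.results / overwrite=yes;
      dcl thread compute_t t;
      method run();
         set from t threads=4;   /* rows stream back as each thread produces them */
      end;
   enddata;
   run;
quit;

On an SMP machine those threads run on separate CPU cores; the "maximum warp" the excerpt is building toward is presumably running the same kind of thread program distributed across the nodes of a massively parallel platform such as Hadoop.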
I remember the first time I was faced with the challenge of parallelizing a DATA step process. It was 2001 and SAS V8.1 was shiny and new. We were processing very large data sets, and the computations performed on each record were quite complex. The processing was crawling along on …
Read more »
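The excerpt stops before the solution, but one common way to parallelize a DATA step in that era was MP CONNECT from SAS/CONNECT: sign on several asynchronous SAS sessions, give each a slice of the data, and wait for all of them to finish. A rough sketch of that approach, using a hypothetical table work.big of about a million rows; the slice boundaries and per-record logic are illustrative, not the code from the post:

options autosignon sascmd="sas";   /* MP CONNECT: spawn local sessions on demand */

rsubmit task1 wait=no inheritlib=(work=pwork);
   data pwork.out1;                /* first slice, processed asynchronously */
      set pwork.big (firstobs=1 obs=500000);
      /* complex per-record computation goes here */
   run;
endrsubmit;

rsubmit task2 wait=no inheritlib=(work=pwork);
   data pwork.out2;                /* second slice, in a second session */
      set pwork.big (firstobs=500001);
      /* complex per-record computation goes here */
   run;
endrsubmit;

waitfor _all_ task1 task2;         /* block until both slices finish */
signoff task1; signoff task2;

data work.combined;                /* reassemble the results */
   set work.out1 work.out2;
run;

Each RSUBMIT block runs in its own SAS session, so the slices are processed concurrently instead of as one sequential record stream.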