Recently a client came to us to see if we could help them automate their RFP distribution system. Today an employee manually checks several websites for RFPs and alerts the appropriate business vertical when a relevant one is found; because the data scraping is manual, the process is slow and RFPs get missed. For the proof-of-concept phase, we decided to build a machine learning model to classify the RFPs correctly and provide a way to automate their routing. The client wanted to break the project into stages, so once the initial proof of concept succeeded, the remaining pieces needed to automate the whole process would receive the go-ahead. If you would like a proof of concept, visit our Business Analytics page for more information.
This week I had a chance to work with a client who wanted to start their journey into Business Continuity Planning using the Azure Site Recovery tool. This was their first time doing recovery planning, so they wanted to start with a few non-production machines, watch for any performance impact on those machines, and finish with a failover test. Along the way I ran into some issues that I thought would be helpful for others.
An organization I worked with owned a site in New Orleans. Our disaster recovery plan was simple: it required the General Manager to transport the most recent backup tapes to a designated, accessible sister site, where IT would redeploy the site's data and applications. When Katrina hit, our disaster recovery plan worked flawlessly and we recovered within the prescribed RTO and RPO. However, the 37 employees who used that data to generate revenue for the organization were scattered around the country, unable to conduct business, leaving our customers with an inferior product. What good is a disaster recovery plan without a solid business continuity plan behind it?
4 Solutions to the Azure talent gap
Business transformation and optimization are major drivers behind cloud adoption today. Azure, Microsoft's public cloud, is the fastest-growing public cloud platform in the world. Azure has emerged as a leader in the public cloud space given its emphasis on innovation, enterprise scalability, and security.
Last time (Fixing OMS Workspaces) we looked at a way to repair or distribute OMS Workspace settings using PowerShell. Wouldn't it be nice if we could leverage SCOM's access to individual machines to keep them attached to their workspaces? If we could pull this off, we could minimize the amount of time we're blind to each server in our environment. Why use SCOM to push OMS settings?
We've seen some interesting behavior in Azure with the HealthService agent, which provides the connection to SCOM in our hosted environment and to OMS in Azure. The management groups registered with the service (the HealthService, part of the Microsoft Monitoring Agent, MMA for short) seem to disappear every once in a while. We suspect it happens during updates to the agent (or to the extension, if you're deploying from the Azure portal).
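If you want to check whether a machine's agent has lost its registrations, a quick way is the agent's COM configuration object. The sketch below assumes the Microsoft Monitoring Agent is installed and is run from an elevated PowerShell session; the management group name, server name, and port are placeholders you would replace with your own values.

```powershell
# Sketch: inspect (and, if needed, restore) the MMA's registered
# management groups and OMS workspaces via the agent's COM object.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'

# List the SCOM management groups currently registered with the agent.
# If the collection is empty, the registration has disappeared.
$mma.GetManagementGroups()

# List the OMS (Log Analytics) workspaces the agent reports to.
$mma.GetCloudWorkspaces()

# Re-register the expected management group if it is missing.
# 'MyMgmtGroup' and 'scom-ms01.contoso.local' are hypothetical names;
# 5723 is the default SCOM agent communication port.
$mma.AddManagementGroup('MyMgmtGroup', 'scom-ms01.contoso.local', 5723)
$mma.ReloadConfiguration()
```

Wrapped in a script, those same calls are what SCOM (or any remoting tool) could run on each server to detect and repair a dropped registration.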