Feed aggregator

Materialized view log, LogMiner

Tom Kyte - 1 hour 37 min ago
Hello Tom, would you explain a little which way is better to check on delta of information to populate DWh on schema? Materialized view log or LOGMNR?
Categories: DBA Blogs

Oracle 12c Performance issue after upgrading database from 10.2.0.5.0 to 12.1.0.2.0

Tom Kyte - 1 hour 37 min ago
Hello, One of our client has upgraded Test environment Oracle database from 10.2.0.5.0 to 12.1.0.2.0 with Production data. After upgrading the database, we are facing performance issue in few processes which are taking huge time to complete com...
Categories: DBA Blogs

EXP: how to include the "CREATE USER" statement without performing a full export?

Tom Kyte - 1 hour 37 min ago
Hello. I need to export an oracle schema using exp, included the "CREATE USER" statement. Now, for do that the only think to do that I know is perform a full export of the database. The problem is that in my database there are a lot of users and th...
Categories: DBA Blogs

rman question

Tom Kyte - 1 hour 37 min ago
thomas, happy holiday. I have question(s) related to rman 1. what is the difference between - to back up the current control file and to backup up control file copy(backup controlfile copy) 2. when using either of above command tag and ...
Categories: DBA Blogs

Oracle VM Server: How to Upgrade Oracle Manager

Dietrich Schroff - Thu, 2019-02-21 14:47
Because of my connection problem to ovmcli via

ssh -l admin@localhost -p 10000
I decided to upgrade my OVM Manager.

After downloading OVM Manager 3.4.6, I mounted the ISO image:
mount /dev/sr0 /mnt
cd /mnt
[root@oraVMManager mnt]# ls -l
total 149832
drwxr-xr-x. 7 root root      8192 18. Nov 21:00 components
-r-xr-x---. 1 root root     11556 18. Nov 20:59 createOracle.sh
-rw-r--r--. 1 root root       230 18. Nov 20:59 oracle-validated.params
-r-xr-x---. 1 root root 149109589 18. Nov 21:00 ovmm-installer.bsx
-rw-r--r--. 1 root root   4293118 18. Nov 20:55 OvmSDK_3.4.6.2105.zip
-r-xr-x---. 1 root root      1919 18. Nov 20:59 runInstaller.sh
-rw-r--r--. 1 root root       372 18. Nov 20:59 sample.yml
-r--r--r--. 1 root root      1596 18. Nov 21:00 TRANS.TBL
The upgrade procedure can be found here (Oracle Documentation). So let's start:
# ./runInstaller.sh --installtype Upgrade

Oracle VM Manager Release 3.4.6 Installer

Oracle VM Manager Installer log file:
/var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log

Verifying upgrading prerequisites ...
*** WARNING: Ensure that each Oracle VM Server for x86 has at least 200MB of available space for the /boot partition and 3GB of available space for the / partition.
*** WARNING: Recommended memory for the Oracle VM Manager server installation using Local MySql DB is 7680 MB RAM

Starting Upgrade ...

Reading database parameters from config ...

==========================
Typically the current Oracle VM Manager database password will be the same as the Oracle VM Manager application password.

==========================
Database Repository
==========================
Please enter the current Oracle VM Manager database password for user ovs:

Oracle VM Manager application
=============================
Please enter the current Oracle VM Manager application password for user admin:

Oracle Weblogic Server 12c
==========================
Please enter the current password for the WebLogic domain administrator:

Please enter your fully qualified domain name, e.g. ovs123.us.oracle.com, (or IP address) of your management server for SSL certification generation 192.168.178.37 [oraVMManager.fritz.box]: 
Successfully verified password for user root
Successfully verified password for user appfw

Verifying configuration ...
Verifying 3.4.4 meets the minimum version for upgrade ...

Upgrading from version 3.4.4.1709 to version 3.4.6.2105

Start upgrading Oracle VM Manager:
   1: Continue
   2: Abort

   Select Number (1-2): 1

Running full database backup ...
Successfully backed up database to /u01/app/oracle/mysql/dbbackup/3.4.4_preUpgradeBackup-20190125_051150
Running ovm_preUpgrade script, please be patient this may take a long time ...
Exporting weblogic embedded LDAP users
Stopping service on Linux: ovmcli ...
Stopping service on Linux: ovmm ...
Exporting core database, please be patient this may take a long time  ...
NOTE: To monitor progress, open another terminal session and run: tail -f /var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log

Product component : Java in '/u01/app/oracle/java'
Java is installed ...

Removing Java installation ...


Installing Java ...


DB component : MySQL RPM package

MySQL RPM package installed by OVMM was found...

Removing MySQL RPM package installation ...


Installing Database Software...
Retrieving MySQL Database 5.6 ...
Unzipping MySQL RPM File ...
Installing MySQL 5.6 RPM package ...
Configuring MySQL Database 5.6 ...
Installing MySQL backup RPM package ...

Product component : Oracle VM Manager in '/u01/app/oracle/ovm-manager-3/'
Oracle VM Manager is installed ...
Removing Oracle VM Manager installation ...

Product component : Oracle WebLogic Server in '/u01/app/oracle/Middleware/'
Oracle WebLogic Server is installed

Removing Oracle WebLogic Server installation ...
Service ovmm is deleted.
Service ovmcli is deleted.


Retrieving Oracle WebLogic Server 12c and ADF ...
Installing Oracle WebLogic Server 12c and ADF ...
Applying patches to Weblogic ...
Applying patch to ADF ...


Installing Oracle VM Manager Core ...
Retrieving Oracle VM Manager Application ...
Extracting Oracle VM Manager Application ...

Retrieving Oracle VM Manager Upgrade tool ...
Extracting Oracle VM Manager Upgrade tool ...
Installing Oracle VM Manager Upgrade tool ...
Installing Oracle VM Manager WLST Scripts ...


Dropping the old database user 'appfw' ...
Dropping the old database 'appfw' ...
Creating new domain...
Creating new domain done.
Upgrading core database, please be patient this may take a long time ...
NOTE: To monitor progress, open another terminal session and run: tail -f /var/log/ovmm/ovm-manager-3-install-2019-01-25-051107.log
Starting restore domain's SSL configuration and create appfw database tables.
Restore domain's SSL configuration and create appfw database tables done.
AdminServer started.
Importing weblogic embedded LDAP users

Retrieving Oracle VM Manager CLI tool ...
Extracting Oracle VM Manager CLI tool...
Installing Oracle VM Manager CLI tool ...



Retrieving Oracle VM Manager Shell & API ...
Extracting Oracle VM Manager Shell & API ...
Installing Oracle VM Manager Shell & API ...

Retrieving Oracle VM Manager Wsh tool ...
Extracting Oracle VM Manager Wsh tool ...
Installing Oracle VM Manager Wsh tool ...

Retrieving Oracle VM Manager Tools ...
Extracting Oracle VM Manager Tools ...
Installing Oracle VM Manager Tools ...

Retrieving ovmcore-console ...
The ovmcore-console RPM package is latest, needn't to upgrade ...
Copying Oracle VM Manager shell to '/usr/bin/ovm_shell.sh' ...
Installing ovm_admin.sh in '/u01/app/oracle/ovm-manager-3/bin' ...
Installing ovm_upgrade.sh in '/u01/app/oracle/ovm-manager-3/bin' ...



Enabling Oracle VM Manager service ...
Shutting down Oracle VM Manager instance ...
Starting Oracle VM Manager instance ...

Please wait while WebLogic configures the applications...
Trying to connect to core via ovmwsh (attempt 1 of 20) ...
Trying to connect to core via ovm_shell (attempt 1 of 5)...

Installation Summary
--------------------
Database configuration:
  Database type               : MySQL
  Database host name          : localhost
  Database name               : ovs
  Database listener port      : 49500
  Database user               : ovs

Weblogic Server configuration:
  Administration username     : weblogic

Oracle VM Manager configuration:
  Username                    : admin
  Core management port        : 54321
  UUID                        : 0004fb000001000019f1074e05c43aa1


Passwords:
There are no default passwords for any users. The passwords to use for Oracle VM Manager, Database, and Oracle WebLogic Server have been set by you during this installation. In the case of a default install, all passwords are the same.

Oracle VM Manager UI:
  https://oraVMManager.fritz.box:7002/ovm/console
Log in with the user 'admin', and the password you set during the installation.

For more information about Oracle Virtualization, please visit:
  http://www.oracle.com/virtualization/

3.2.10/3.2.11 Oracle VM x86 Servers and SPARC agent 3.3.1 managed Servers are no longer supported in Oracle VM Manager 3.4. Please upgrade your Server to a more current version for full support
For instructions, see the Oracle VM 3.4 Installation and Upgrade guide.

Oracle VM Manager upgrade complete.

Please remove configuration file /tmp/ovm_configA6Dpxd.

After that, the GUI shows the new version via "help":




Where is My New Optional Default Tile?

Jim Marion - Thu, 2019-02-21 14:14
Navigation is critical to any business application. Classic used breadcrumbs for navigation. As I'm sure you noticed, Fluid is different, using Tiles and Homepages as the starting point for application navigation.
"In Fluid, tiles and homepages represent the primary navigation model, replacing Classic's breadcrumb menu." In Classic, breadcrumb navigation is managed by administrators. It is fixed, not variable, not personalizable. Users cannot personalize Classic navigation (other than creating favorites). Did I say Fluid is different? Yes. Fluid gives users significant control over their navigational view by allowing them to personalize tiles and homepages. This can cause significant problems, with users removing tiles that represent critical business functions. There are a few solutions for this problem (disable personalization, mark tiles as required, etc.; see Section 4 of Simon's blog post for ideas). What I want to focus on is confusion regarding optional default tiles, where an optional default tile doesn't default onto a homepage. Here is the scenario:
  • A homepage already exists
  • As an administrator, you configure a new tile as Optional Default



After configuring the homepage, all users that have NOT personalized will see the tile. Put another way, any user that has personalized the homepage will not see the new tile (and a simple accidental drag and drop will result in a personalization). Here is what users that personalize will see:


If it is optional default, what happened to the default part? When users personalize their homepages, PeopleSoft clones the current state of the homepage into a user table. Let's say Tom and Jill both personalize their home pages. Tom will now have a personalized copy of the default configuration and Jill will have an entirely different personalized copy.



Administrators will continue to insert optional default content into homepages, but Tom and Jill will not see those optional default tiles. Tom and Jill's homepages are now detached from the source. We can push optional default tiles into Tom's and Jill's copies by using the Tile Publish button available to each homepage content reference (in the portal registry). This App Engine program inserts a row for each optional default tile into each user's copy of the homepage metadata.

Pretty clear and straightforward so far? OK, let's make it more complicated. Let's say an administrator adds a new optional default tile to the default homepage described above and presses the Publish Tile button. After the App Engine runs, the administrator notices Tom sees the tile, but Jill does not. What went wrong? If Jill doesn't have security access to the tile's target, Jill won't see the new tile. Let's say Jill is supposed to have security access, so we update permissions and roles. We check Jill's homepage again. Does Jill see the tile? No. Why not? When we published the tile, Jill did not have security access so PeopleSoft didn't insert a row into Jill's personalization metadata. How can we make this tile appear for Jill? We could publish again. If we recognize and resolve the security issue immediately after publishing, this may be reasonable.

Let's play out this scenario a little differently. Some time has passed since we published. Tom has seen and removed the new tile from his homepage. One day Tom is at the water cooler talking about this annoying new tile that just appeared one day so he removed it. Jill overhears Tom and logs in to look for this annoying tile. After some searching, however, she doesn't see it on her homepage. She calls the help desk to find out why she doesn't have access to the annoying tile (that she will probably remove after seeing it). This is when you discover the security issue and make the tile available to Jill. For Jill to see this tile as a default, however, you will need to republish the tile. When you republish the tile, what will happen to Tom's homepage? Yes, you guessed it. Tom will see the tile appear again and will likely call the help desk to complain about the annoying tile that just reappeared.

What's the solution? At this time there is no delivered, recommended solution. The App Engine is very short, containing a couple of SQL statements. Using it as a guide, it is trivial to write a one-off metadata insert for Jill and all others affected by the security change without affecting Tom. When writing SQL inserts into PeopleTools tables, however, we must consider cache, version increments, and many other risk factors (I probably would not do this). I would say it is safer to annoy Tom.

--

Jim is the Principal PeopleTools instructor at JSMPROS. Take your PeopleTools skills to the next level by scheduling PeopleTools training with us today!

a sql code to remove all the special characters from a particular column of a table

Tom Kyte - Thu, 2019-02-21 12:46
a sql code to remove all the special characters from a particular column of a table . for example : iNPUT-ABC -D.E.F OUTPUT ABC DEF AND IF THERE IS TWO NAMES LIKE ABC PRIVATE LTD ONE SPACE SHOULD BE MAINTAINED BETWEEN 2 WORDS.
Categories: DBA Blogs

DISTINCT vs UNION

Tom Kyte - Thu, 2019-02-21 12:46
Hello Tom, my test case create table xxx as select * from dba_tables; insert into xxx select * from dba_tables; I tried 2 queries 1* select distinct * from xxx 2359 rows selected. Execution Plan -----------------------------------------...
Categories: DBA Blogs

Guidance for Providing Access to the Oracle E-Business Suite Database for Extensions and ...

Steven Chan - Thu, 2019-02-21 10:05

Oracle E-Business Suite is often extended, customized and integrated with third-party products. We highly recommend that you follow our published guidance for performing extensions, customizations and third-party integrations.

In addition to these guidelines, you should also follow our equivalent recommendations for granting access to Oracle E-Business Suite database objects. Our guidance for accessing an EBS database takes a least-privileges approach to providing access to database objects. The database access recommendations complement our guidance for deploying customizations, extensions and third-party integrations.

Categories: APPS Blogs

Spark Streaming and Kafka - Creating a New Kafka Connector

Rittman Mead Consulting - Thu, 2019-02-21 08:39
More Kafka and Spark, please!

Hello, world!

Having joined Rittman Mead more than 6 years ago, I felt the time had come for my first blog post. Let me start by standing on the shoulders of blogging giants, revisiting Robin's old blog post Getting Started with Spark Streaming, Python, and Kafka.

The blog post was very popular, touching on the subjects of Big Data and Data Streaming. To put my own twist on it, I decided to:

  • not use Twitter as my data source, because there surely must be other interesting data sources out there,
  • use Scala, my favourite programming language, to see how different the experience is from using Python.
Why Scala?

Scala is admittedly more challenging to master than Python. However, because Scala compiles into Java bytecode, it can be used pretty much anywhere where Java is being used. And Java is being used everywhere. Python is arguably even more widely used than Java; however, it remains a dynamically typed scripting language that is easy to write in but can be hard to debug.

Is there a case for using Scala instead of Python for the job? Both Spark and Kafka were written in Scala (and Java), hence they should get on like a house on fire, I thought. Well, we are about to find out.

My data source: OpenWeatherMap

When it comes to finding sample data sources for data analysis, the selection out there is amazing. At the time of this writing, Kaggle offers 14,470 freely available datasets, many of them in easy-to-digest formats like CSV and JSON. However, when it comes to real-time sample data streams, the selection is quite limited. Twitter is usually the go-to choice - easily accessible and well documented. Too bad I decided not to use Twitter as my source.

Another alternative is the Wikipedia Recent changes stream. Although in the stream schema there are a few values that would be interesting to analyse, overall this stream is more boring than it sounds - the text changes themselves are not included.

Fortunately, I came across the OpenWeatherMap real-time weather data website. They have a free API tier, which is limited to 1 request per second, which is quite enough for tracking changes in weather. Their different API schemas return plenty of numeric and textual data, all interesting for analysis. The APIs work in a very standard way - first you apply for an API key. With the key you can query the API with a simple HTTP GET request (Apply for your own API key instead of using the sample one - it is easy.):

This request

https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22

gives the following result:

{
  "coord": {"lon":-0.13,"lat":51.51},
  "weather":[
    {"id":300,"main":"Drizzle","description":"light intensity drizzle","icon":"09d"}
  ],
  "base":"stations",
  "main": {"temp":280.32,"pressure":1012,"humidity":81,"temp_min":279.15,"temp_max":281.15},
  "visibility":10000,
  "wind": {"speed":4.1,"deg":80},
  "clouds": {"all":90},
  "dt":1485789600,
  "sys": {"type":1,"id":5091,"message":0.0103,"country":"GB","sunrise":1485762037,"sunset":1485794875},
  "id":2643743,
  "name":"London",
  "cod":200
}
Getting data into Kafka - considering the options

There are several options for getting your data into a Kafka topic. If the data will be produced by your application, you should use the Kafka Producer Java API. You can also develop Kafka Producers in .Net (usually C#), C, C++, Python, Go. The Java API can be used by any programming language that compiles to Java bytecode, including Scala. Moreover, there are Scala wrappers for the Java API: skafka by Evolution Gaming and Scala Kafka Client by cakesolutions.

OpenWeatherMap is not my application and what I need is integration between its API and Kafka. I could cheat and implement a program that would consume OpenWeatherMap's records and produce records for Kafka. The right way of doing that, however, is by using Kafka Source connectors, for which there is an API: the Connect API. Unlike the Producers, which can be written in many programming languages, for the Connectors I could only find a Java API. I could not find any nice Scala wrappers for it. On the upside, Confluent's Connector Developer Guide is excellent, rich in detail though not quite a step-by-step cookbook.

However, before we decide to develop our own Kafka connector, we must check for existing connectors. The first place to go is Confluent Hub. There are quite a few connectors there, complete with installation instructions, ranging from connectors for particular environments like Salesforce, SAP, IRC, Twitter to ones integrating with databases like MS SQL, Cassandra. There is also a connector for HDFS and a generic JDBC connector. Is there one for HTTP integration? Looks like we are in luck: there is one! However, this connector turns out to be a Sink connector.

Ah, yes, I should have mentioned - there are two flavours of Kafka Connectors: the Kafka-inbound are called Source Connectors and the Kafka-outbound are Sink Connectors. And the HTTP connector in Confluent Hub is Sink only.

Googling for Kafka HTTP Source Connectors gives few interesting results. The best I could find was Pegerto's Kafka Connect HTTP Source Connector. Contrary to what the repository name suggests, the implementation is quite domain-specific, for extracting stock prices from particular web sites, and has very little error handling. Searching Scaladex for 'Kafka connector' does yield quite a few results but nothing for http. However, there I found Agoda's nice and simple Source JDBC connector (though for a very old version of Kafka), written in Scala. (Do not use this connector for JDBC sources; instead use the one by Confluent.) I can use this as an example to implement my own.

Creating a custom Kafka Source Connector

The best place to start when implementing your own Source Connector is the Confluent Connector Development Guide. The guide uses JDBC as an example. Our source is an HTTP API, so early on we must establish whether our data source is partitioned, whether we need to manage offsets for it, and what the schema is going to look like.

Partitions

Is our data source partitioned? A partition is a division of source records that usually depends on the source medium. For example, if we are reading our data from CSV files, we can consider the different CSV files to be a natural partition of our source data. Another example of partitioning could be database tables. But in both cases the best partitioning approach depends on the data being gathered and its usage. In our case, there is only one API URL and we are only ever requesting current data. If we were to query weather data for different cities, that would be a very good partitioning - by city. Partitioning would allow us to parallelise the Connector data gathering - each partition would be processed by a separate task. To make my life easier, I am going to have only one partition.

Offsets

Offsets are for keeping track of the records already read and the records yet to be read. An example of that is reading the data from a file that is continuously being appended - there can be rows already inserted into a Kafka topic and we do not want to process them again to avoid duplication. Why would that be a problem? Surely, when going through a source file row by row, we know which row we are looking at. Anything above the current row is processed, anything below - new records. Unfortunately, most of the time it is not as simple as that: first of all Kafka supports concurrency, meaning there can be more than one Task busy processing Source records. Another consideration is resilience - if a Kafka Task process fails, another process will be started up to continue the job. This can be an important consideration when developing a Kafka Source Connector.

Is it relevant for our HTTP API connector? We are only ever requesting current weather data. If our process fails, we may miss some time periods but we cannot recover them later on. Offset management is not required for our simple connector.

So that is Partitions and Offsets dealt with. Can we make our lives just a bit more difficult? Fortunately, we can. We can create a custom Schema and then parse the source data to populate a Schema-based Structure. But we will come to that later. First let us establish the Framework for our Source Connector.

Source Connector - the Framework

The starting point for our Source Connector is a pair of Java API classes: SourceConnector and SourceTask. We will put them into separate .scala source files but they are shown here together:

import org.apache.kafka.connect.source.{SourceConnector, SourceTask}

class HttpSourceConnector extends SourceConnector {...}
class HttpSourceTask extends SourceTask {...}

These two classes will be the basis for our Source Connector implementation:

  • HttpSourceConnector represents the Connector process management. Each Connector process will have only one SourceConnector instance.
  • HttpSourceTask represents the Kafka task doing the actual data integration work. There can be one or many Tasks active for an active SourceConnector instance.

We will have some additional classes for config and for HTTP access.
But first let us look at each of the two classes in more detail.

SourceConnector class

SourceConnector is an abstract class that defines an interface that our HttpSourceConnector needs to adhere to. The first function we need to override is config:

  private val configDef: ConfigDef =
      new ConfigDef()
          .define(HttpSourceConnectorConstants.HTTP_URL_CONFIG, Type.STRING, Importance.HIGH, "Web API Access URL")
          .define(HttpSourceConnectorConstants.API_KEY_CONFIG, Type.STRING, Importance.HIGH, "Web API Access Key")
          .define(HttpSourceConnectorConstants.API_PARAMS_CONFIG, Type.STRING, Importance.HIGH, "Web API additional config parameters")
          .define(HttpSourceConnectorConstants.SERVICE_CONFIG, Type.STRING, Importance.HIGH, "Kafka Service name")
          .define(HttpSourceConnectorConstants.TOPIC_CONFIG, Type.STRING, Importance.HIGH, "Kafka Topic name")
          .define(HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG, Type.STRING, Importance.HIGH, "Polling interval in milliseconds")
          .define(HttpSourceConnectorConstants.TASKS_MAX_CONFIG, Type.INT, Importance.HIGH, "Kafka Connector Max Tasks")
          .define(HttpSourceConnectorConstants.CONNECTOR_CLASS, Type.STRING, Importance.HIGH, "Kafka Connector Class Name (full class path)")

  override def config: ConfigDef = configDef

This is validation for all the required configuration parameters. We also provide a description for each configuration parameter, which will be shown in the error message if the configuration is missing.

HttpSourceConnectorConstants is an object where config parameter names are defined - these configuration parameters must be provided in the connector configuration file:

object HttpSourceConnectorConstants {
  val HTTP_URL_CONFIG               = "http.url"
  val API_KEY_CONFIG                = "http.api.key"
  val API_PARAMS_CONFIG             = "http.api.params"
  val SERVICE_CONFIG                = "service.name"
  val TOPIC_CONFIG                  = "topic"
  val TASKS_MAX_CONFIG              = "tasks.max"
  val CONNECTOR_CLASS               = "connector.class"

  val POLL_INTERVAL_MS_CONFIG       = "poll.interval.ms"
  val POLL_INTERVAL_MS_DEFAULT      = "5000"
}
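
For reference, in a standalone Kafka Connect worker these keys would come from a connector properties file along the following lines. This is an illustration only - the values are placeholders, and how http.api.params is interpreted depends on the configuration wrapper, which the post does not show:

# Illustrative connector properties (placeholders, not from the post)
name=weather-http-source
connector.class=<full class path of HttpSourceConnector>
tasks.max=1
http.url=https://api.openweathermap.org/data/2.5/weather
http.api.key=<your OpenWeatherMap API key>
http.api.params=q=London,uk
service.name=openweathermap
topic=weather-topic
poll.interval.ms=5000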

Another simple function to be overridden is taskClass - for the SourceConnector class to know its corresponding SourceTask class.

  override def taskClass(): Class[_ <: SourceTask] = classOf[HttpSourceTask]

The last two functions to be overridden here are start and stop. These are called upon the creation and termination of a SourceConnector instance (not a Task instance). JavaMap here is an alias for java.util.Map - a Java Map, not to be confused with the native Scala Map, which cannot be used here. (If you are a Python developer, a Map in Java/Scala is similar to a Python dictionary, but strongly typed.) The interface requires Java data structures, but that is fine - we can convert between the two. By far the biggest problem here is the assignment of the connectorConfig variable - we cannot use a functional-programming-friendly immutable value here. The variable is defined at the class level

  private var connectorConfig: HttpSourceConnectorConfig = _

and is set in the start function and then referred to in the taskConfigs function further down. This does not look pretty in Scala. Hopefully somebody will write a Scala wrapper for this interface.

Because there is no logout/shutdown/sign-out required for the HTTP API, the stop function just writes a log message.

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try (new HttpSourceConnectorConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => connectorConfig = cfg
      case Failure(err) => connectorLogger.error(s"Could not start Kafka Source Connector ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }
  }

  override def stop(): Unit = {
    connectorLogger.info(s"Stopping Kafka Source Connector ${this.getClass.getName}.")
  }
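
The snippets in this post also rely on a few aliases and converters that are used (JavaMap, JavaList, JavaBoolean, asScala/asJava) but never declared. The imports below are my assumption about what sits at the top of the source files:

import java.util.{List => JavaList, Map => JavaMap}
import java.util.concurrent.atomic.{AtomicBoolean => JavaBoolean} // assumed, since poll() calls running.get
import scala.collection.JavaConverters._                          // provides .asScala and .asJava
import scala.util.{Failure, Success, Try}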

HttpSourceConnectorConfig is a thin wrapper class for the configuration.
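
The class itself is not listed in the post. Judging by the accessors used further down (getApiHttpUrl, getApiKey, getApiParams, getPollInterval, getTopic, getService) and by taskConfigs above, a minimal sketch could look like the following - the format of the http.api.params string is my assumption:

class HttpSourceConnectorConfig(val connectorProperties: Map[String, String]) {
  private def required(key: String): String =
    connectorProperties.getOrElse(key, throw new IllegalArgumentException(s"Missing required property: $key"))

  def getApiHttpUrl: String = required(HttpSourceConnectorConstants.HTTP_URL_CONFIG)
  def getApiKey: String     = required(HttpSourceConnectorConstants.API_KEY_CONFIG)
  def getTopic: String      = required(HttpSourceConnectorConstants.TOPIC_CONFIG)
  def getService: String    = required(HttpSourceConnectorConstants.SERVICE_CONFIG)

  // Assumption: additional API parameters are passed as 'key1=val1&key2=val2'
  def getApiParams: Map[String, String] =
    connectorProperties.get(HttpSourceConnectorConstants.API_PARAMS_CONFIG)
      .map(_.split("&").toList.flatMap { pair =>
        pair.split("=", 2) match {
          case Array(k, v) => Some(k -> v)
          case _           => None
        }
      }.toMap)
      .getOrElse(Map.empty)

  def getPollInterval: Long =
    connectorProperties.getOrElse(
      HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG,
      HttpSourceConnectorConstants.POLL_INTERVAL_MS_DEFAULT).toLong
}

// HttpSourceTaskConfig is described later as being the same thing, so it can simply inherit:
class HttpSourceTaskConfig(props: Map[String, String]) extends HttpSourceConnectorConfig(props)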

We are almost done here. The last function to be overridden is taskConfigs.
This function is in charge of producing (potentially different) configurations for different Source Tasks. In our case, there is no reason for the Source Task configurations to differ. In fact, our HTTP API will benefit little from parallelism, so, to keep things simple, we can assume the number of tasks always to be 1.

  override def taskConfigs(maxTasks: Int): JavaList[JavaMap[String, String]] = List(connectorConfig.connectorProperties.asJava).asJava

The name of the taskConfigs function was changed in Kafka version 2.1.0 - please consider that when using this code for older Kafka versions.

Source Task class

In a similar manner to the Source Connector class, we implement the Source Task abstract class. It is only slightly more complex than the Connector class.

Just like for the Connector, there are start and stop functions to be overridden for the Task.

Remember the taskConfigs function from above? This is where task configuration ends up - it is passed to the Task's start function. Also, similarly to the Connector's start function, we parse the connection properties with HttpSourceTaskConfig, which is the same as HttpSourceConnectorConfig - configuration for Connector and Task in our case is the same.

We also set up the Http service that we are going to use in the poll function - we create an instance of the WeatherHttpService class. (Please note that start is executed only once, upon the creation of the task and not every time a record is polled from the data source.)

  override def start(connectorProperties: JavaMap[String, String]): Unit = {
    Try(new HttpSourceTaskConfig(connectorProperties.asScala.toMap)) match {
      case Success(cfg) => taskConfig = cfg
      case Failure(err) => taskLogger.error(s"Could not start Task ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
    }

    val apiHttpUrl: String = taskConfig.getApiHttpUrl
    val apiKey: String = taskConfig.getApiKey
    val apiParams: Map[String, String] = taskConfig.getApiParams

    val pollInterval: Long = taskConfig.getPollInterval

    taskLogger.info(s"Setting up an HTTP service for ${apiHttpUrl}...")
    Try( new WeatherHttpService(taskConfig.getTopic, taskConfig.getService, apiHttpUrl, apiKey, apiParams) ) match {
      case Success(service) =>  sourceService = service
      case Failure(error) =>    taskLogger.error(s"Could not establish an HTTP service to ${apiHttpUrl}")
                                throw error
    }

    taskLogger.info(s"Starting to fetch from ${apiHttpUrl} each ${pollInterval}ms...")
    running = new JavaBoolean(true)
  }

The Task also has the stop function. But, just like for the Connector, it does not do much, because there is no need to sign out from an HTTP API session.
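
The Task's stop is not shown in the post either; under the assumption that it only needs to log the shutdown and clear the running flag checked by poll, a minimal sketch would be:

  override def stop(): Unit = {
    taskLogger.info(s"Stopping Kafka Source Task ${this.getClass.getName}.")
    running = new JavaBoolean(false) // mirrors the flag set in start(), so poll() returns null afterwards
  }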

Now let us see how we get the data from our HTTP API - by overriding the poll function.

The fetchRecords function uses the sourceService HTTP service initialised in the start function. sourceService's sourceRecords function requests data from the HTTP API.

  override def poll(): JavaList[SourceRecord] = this.synchronized { if(running.get) fetchRecords else null }

  private def fetchRecords: JavaList[SourceRecord] = {
    taskLogger.debug("Polling new data...")

    val pollInterval = taskConfig.getPollInterval
    val startTime    = System.currentTimeMillis

    val fetchedRecords: Seq[SourceRecord] = Try(sourceService.sourceRecords) match {
      case Success(records)                    => if(records.isEmpty) taskLogger.info(s"No data from ${taskConfig.getService}")
                                                  else taskLogger.info(s"Got ${records.size} results for ${taskConfig.getService}")
                                                  records

      case Failure(error: Throwable)           => taskLogger.error(s"Failed to fetch data for ${taskConfig.getService}: ", error)
                                                  Seq.empty[SourceRecord]
    }

    val endTime     = System.currentTimeMillis
    val elapsedTime = endTime - startTime

    if(elapsedTime < pollInterval) Thread.sleep(pollInterval - elapsedTime)

    fetchedRecords.asJava
  }

Phew - that is the interface implementation done. Now for the fun part...

Requesting data from OpenWeatherMap's API

The fun part is rather straightforward. We use the scalaj.http library to issue a very simple HTTP request and get a response.

Our WeatherHttpService implementation will have two functions:

  • httpServiceResponse that will format the request and get data from the API
  • sourceRecords that will parse the Schema and wrap the result within the Kafka SourceRecord class.

Please note that error handling takes place in the fetchRecords function above.

    override def sourceRecords: Seq[SourceRecord] = {
        val weatherResult: HttpResponse[String] = httpServiceResponse
        logger.info(s"Http return code: ${weatherResult.code}")
        val record: Struct = schemaParser.output(weatherResult.body)

        List(
            new SourceRecord(
                Map(HttpSourceConnectorConstants.SERVICE_CONFIG -> serviceName).asJava, // partition
                Map("offset" -> "n/a").asJava, // offset
                topic,
                schemaParser.schema,
                record
            )
        )
    }

    private def httpServiceResponse: HttpResponse[String] = {

        @tailrec
        def addRequestParam(accu: HttpRequest, paramsToAdd: List[(String, String)]): HttpRequest = paramsToAdd match {
            case (paramKey,paramVal) :: rest => addRequestParam(accu.param(paramKey, paramVal), rest)
            case Nil => accu
        }

        val baseRequest = Http(apiBaseUrl).param("APPID",apiKey)
        val request = addRequestParam(baseRequest, apiParams.toList)

        request.asString
    }
Parsing the Schema

Now the last piece of the puzzle - our Schema parsing class.

The short version of it, which would do just fine, is just 2 lines of class (actually - object) body:

object StringSchemaParser extends KafkaSchemaParser[String, String] {
    override val schema: Schema = Schema.STRING_SCHEMA
    override def output(inputString: String) = inputString
}

Here we say we just want to use the pre-defined STRING_SCHEMA value as our schema definition. And pass inputString straight to the output, without any alteration.

Looks too easy, does it not? Schema parsing could be a big part of Source Connector implementation. Let us implement a proper schema parser. Make sure you read the Confluent Developer Guide first.

Our schema parser will be encapsulated into the WeatherSchemaParser object. KafkaSchemaParser is a trait with two type parameters - the inbound and outbound data types. This indicates that the Parser receives data in String format and produces a Kafka Struct value.

object WeatherSchemaParser extends KafkaSchemaParser[String, Struct]
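
The KafkaSchemaParser trait itself is not listed in the post; judging by the two implementations, a minimal sketch would be:

import org.apache.kafka.connect.data.Schema

// Sketch (assumption): the contract shared by StringSchemaParser and WeatherSchemaParser.
trait KafkaSchemaParser[InputType, OutputType] {
  val schema: Schema                        // Kafka Connect schema describing the parser output
  def output(input: InputType): OutputType  // parse the raw input into the output representation
}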

The first step is to create a schema value with the SchemaBuilder. Our schema is rather large, therefore I will skip most fields. The field names given are a reflection of the hierarchy structure in the source JSON. What we are aiming for is a flat, table-like structure - a likely Schema creation scenario.

For JSON parsing we will be using the Scala Circe library, which in turn is based on the Scala Cats library. (If you are a Python developer, you will see that Scala JSON parsing is a bit more involved (this might be an understatement), but, on the flip side, you can be sure about the result you are getting out of it.)

    override val schema: Schema = SchemaBuilder.struct().name("weatherSchema")
        .field("coord-lon", Schema.FLOAT64_SCHEMA)
        .field("coord-lat", Schema.FLOAT64_SCHEMA)

        .field("weather-id", Schema.FLOAT64_SCHEMA)
        .field("weather-main", Schema.STRING_SCHEMA)
        .field("weather-description", Schema.STRING_SCHEMA)
        .field("weather-icon", Schema.STRING_SCHEMA)
        
        // ...
        
        .field("rain", Schema.FLOAT64_SCHEMA)
        
        // ...

Next we define case classes, into which we will be parsing the JSON content.

   case class Coord(lon: Double, lat: Double)
   case class WeatherAtom(id: Double, main: String, description: String, icon: String)

That is easy enough. Please note that the case class attribute names match one-to-one with the attribute names in JSON. However, our Weather JSON schema is rather relaxed when it comes to attribute naming. You can have names like type and 3h, both of which are invalid identifiers in Scala (type is a reserved word and 3h starts with a digit). What do we do? We give the attributes valid Scala names and then implement a decoder:

    case class Rain(threeHours: Double)
    object Rain {
        implicit val decoder: Decoder[Rain] = Decoder.instance { h =>
            for {
                threeHours <- h.get[Double]("3h")
            } yield Rain(
                threeHours
            )
        }
    }

The Rain case class is rather short, with only one attribute. The corresponding JSON name was 3h. We map '3h' to the Scala attribute threeHours.

Not quite as simple as JSON parsing in Python, is it?

In the end, we assemble all sub-case classes into the WeatherSchema case class, representing the whole result JSON.

    case class WeatherSchema(
                                coord: Coord,
                                weather: List[WeatherAtom],
                                base: String,
                                mainVal: Main,
                                visibility: Double,
                                wind: Wind,
                                clouds: Clouds,
                                dt: Double,
                                sys: Sys,
                                id: Double,
                                name: String,
                                cod: Double
                            )

Now, the parsing itself. (Drums, please!)

structInput here is the input JSON in String format. WeatherSchema is the case class we created above. The Circe decode function returns a Scala Either monad, with the error on the Left() and the successful parsing result on the Right() - nice and tidy. And safe.
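
The post does not show its import list; the call below assumes at least Circe's parser entry point, roughly:

import io.circe.Decoder       // the type class for which Rain defines an instance above
import io.circe.parser.decode // decode[A](json: String): Either[io.circe.Error, A]
// The remaining case classes (Coord, WeatherAtom, WeatherSchema, ...) also need Decoder instances,
// either derived with Circe's generic derivation or written by hand like the one for Rain.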

        val weatherParsed: WeatherSchema = decode[WeatherSchema](structInput) match {
            case Left(error) => {
                logger.error(s"JSON parser error: ${error}")
                emptyWeatherSchema
            }
            case Right(weather) => weather
        }

Now that we have the WeatherSchema object, we can construct our Struct object that will become part of the SourceRecord returned by the sourceRecords function in the WeatherHttpService class. That in turn is called from the HttpSourceTask's poll function that is used to populate the Kafka topic.

        val weatherStruct: Struct = new Struct(schema)
            .put("coord-lon", weatherParsed.coord.lon)
            .put("coord-lat", weatherParsed.coord.lat)

            .put("weather-id", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).id)
            .put("weather-main", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).main)
            .put("weather-description", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).description)
            .put("weather-icon", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).icon)

            // ...

Done!

Considering that Schema parsing in our simple example was optional, creating a Kafka Source Connector for us meant creating a Source Connector class, a Source Task class and a Source Service class.

Creating JAR(s)

JAR creation is described in Confluent's Connector Development Guide. The guide mentions two options: either all the library dependencies can be added to the target JAR file, a.k.a. an 'uber-JAR', or the dependencies can be copied to the target folder. In the latter case they must all reside in the same folder, with no subfolder structure. For no particular reason, I went with the latter option.

The Developer Guide says it is important not to include the Kafka Connect API libraries there. (Instead they should be added to the CLASSPATH.) Please note that for the latest Kafka versions it is advised not to add these custom JARs to the CLASSPATH; instead, we will add them to the Connect worker's plugin.path. But that we will leave for another blog post.
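
As an illustration only - the post does not include its build file - a minimal sbt setup could look like the sketch below. The artifact versions are placeholders to be aligned with your Kafka and Scala versions, and the "provided" scope is one way to keep the Connect API out of the packaged artefact:

// build.sbt sketch (assumption): names and versions are illustrative, not taken from the post.
name := "kafka-http-source-connector"
scalaVersion := "2.12.8"

libraryDependencies ++= Seq(
  "org.apache.kafka" %  "connect-api"   % "2.1.0" % "provided", // not bundled, as the guide advises
  "org.scalaj"       %% "scalaj-http"   % "2.4.1",              // HTTP client used by WeatherHttpService
  "io.circe"         %% "circe-core"    % "0.11.1",             // JSON parsing
  "io.circe"         %% "circe-generic" % "0.11.1",
  "io.circe"         %% "circe-parser"  % "0.11.1"
)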

Scala - was it worth using it?

Only if you are a big fan. The code I wrote is very Java-like and it might have been better to write it in Java. However, if somebody writes a Scala wrapper for the Connector interfaces, or, even better, if a Kafka Scala API is released, writing Connectors in Scala would be a very good choice.

Categories: BI & Warehousing

[Video 3 of 5] Oracle Cloud: Create VCN, Subnet, Firewall (Security List), IGW, DRG: Step By Step

Online Apps DBA - Thu, 2019-02-21 08:22

You are asked to deploy a three-tier, highly available application including a Load Balancer on Oracle’s Gen 2 Cloud. Then, ✔ How would you define Network, Subnet & Firewalls in Oracle’s Gen2 Cloud? ✔ How do you allow the database port, but only from the Application Tier? ✔ What are Ingress or Egress Security Rules? […]

The post [Video 3 of 5] Oracle Cloud: Create VCN, Subnet, Firewall (Security List), IGW, DRG: Step By Step appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Simplest Automation: Use Environment Variables

Michael Dinh - Thu, 2019-02-21 07:25

Copy the last five Goldengate trail files from source to destination.

Here are the high-level steps:

Copy trail with prefix (aa*) to new destination:
1. export OLD_DIRDAT=/media/patch/dirdat
2. export NEW_DIRDAT=/media/swrepo/dirdat
3. export TRAIL_PREFIX=rt*
5. ls -l $NEW_DIRDAT
6. ls $OLD_DIRDAT/$TRAIL_PREFIX | head -5
7. ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5
8. cp -fv $(ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5) $NEW_DIRDAT
9. ls -l $NEW_DIRDAT/*

Copy trail with prefix (ab*) to new destination:
export TRAIL_PREFIX=ab*
Repeat steps 5-9

Oracle Survey Finds Enterprises Ready for Benefits of 5G

Oracle Press Releases - Thu, 2019-02-21 07:00
Press Release
Oracle Survey Finds Enterprises Ready for Benefits of 5G Nearly 80 percent of Companies Expect to Deploy Basic Connectivity Solutions by 2021 to Spawn New Revenue Opportunities and Tap Advantages of IoT and Smart Ecosystems

Redwood Shores, Calif.—Feb 21, 2019

While much of the discussion around 5G has centered on consumer devices, enterprises are looking towards the tremendous impact the technology can have on their ability to serve customers and the bottom line. Not only are most companies (97 percent) aware of the benefits of 5G, but 95 percent are already strategically planning how they will take advantage of this next generation of wireless connectivity to power core business initiatives from new services, to IoT and smart ecosystems.

The Oracle Communications study, “5G Smart Ecosystems Are Transforming the Enterprise – Are You Ready?,” surveyed 265 enterprise IT and business decision makers at medium and large enterprises globally in December 2018 to find out how businesses are thinking about 5G today and its potential significance moving forward.

“Enterprises clearly want to capitalize on the promise of 5G, however, to be successful, IT and business leaders must avoid thinking of 5G as just another ‘G,’ and should instead consider it as an enabler to the smart ecosystem we have long talked about,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “This means asking the right questions at the outset, and considering how 5G can help enable upcoming solutions, what timeframe should be considered and how will they will procure and use 5G capabilities as part of their business evolution.”

Born in the cloud, 5G will have the ability to enable enterprises to provision or “slice” core pieces of their networks to power mission-critical new offerings and smart ecosystems. This can range from providing the highest speed connections for life-saving 911 services, to enabling autonomous vehicles to communicate with each other quickly, to ensuring IoT devices in smart factories are providing real-time information on the health of machines and assets.

Outside specific initiatives, respondents believe 5G will have a wide-spread impact across their business, including increasing employee productivity (86 percent), reducing costs (84 percent), enhancing customer experience (83 percent), and improving agility (83 percent). Business decision makers are most focused on quality of experience the technology will bring, while IT is concerned with network speed and resiliency.

Unleashing the Promise of 5G Ecosystems in the Enterprise

When it comes to 5G, enterprise are most focused on:

  • Unlocking the potential of IoT: Beyond initial benefits such as speed and quality of experience, 84 percent of respondents feel that 5G networks will be transformative and have a lasting impact on the way their companies do business. Another 73 percent agree the IoT will be revolutionized by 5G networks and 68 percent feel it will be transformative to their customers.
  • Monetizing new services: Eighty percent expect 5G to generate new revenue streams for their business. Forty-one percent of the respondents would deploy new monetization solutions specifically for 5G services alongside existing systems, while thirty-four percent say they would replace their existing systems with a single, converged solution for all services. Just about one in five (22 percent) said they will utilize and extend existing monetization solutions with 5G.
  • Experience and efficiencies: Eighty-four percent of respondents agree that 5G networks will be transformative and have a lasting impact on the way their companies do business. While business respondents are focused on the quality of experience improvements made possible by 5G, IT respondents care more about the network technologies and the internal efficiencies 5G may enable.
  • Security: While excited about the potential of 5G, both business and IT respondents cited security as a top priority. Fifty-one percent of respondents ranked security as their highest concern.

Oracle’s survey also explored 5G’s potential role in solutions as varied as live streaming, industrial automation, smart homes and buildings, connected vehicles, immersive gaming, augmented and virtual reality. To learn more, click here to access the report or infographic.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Amy Dalkoff
Hill+Knowlton Strategies
+1.312.255.3078
amy.dalkoff@hkstrategies.com
About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Amy Dalkoff

  • +1.312.255.3078

[BLOG] Oracle EBS (R12) On Cloud (OCI): High Level Steps

Online Apps DBA - Thu, 2019-02-21 04:20

Do you wish to deploy Oracle EBS (R12) on Oracle Cloud Infrastructure (OCI) using EBS Cloud Manager, but are unable to do so because of lack of knowledge of what steps to follow? If yes, then Don’t You Worry!! We’ve got you covered. Visit: https://k21academy.com/ebscloud28 & know about: ✔Background History ✔How To Deploy EBS Cloud […]

The post [BLOG] Oracle EBS (R12) On Cloud (OCI): High Level Steps appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Create a primary database using the backup of a standby database on 12cR2

Yann Neuhaus - Thu, 2019-02-21 02:05

The scope of this blog will be to show how to create a primary role database based on a backup of a standby database on 12cR2.

Step 1: We assume that an auxiliary instance has been created and started in nomount mode.

rman target /
restore primary controlfile from 'backup_location_directory/control_.bkp';
exit;

Specifying “restore primary” modifies the flag in the controlfile, so the instance will be mounted as a primary role instance instead of a standby one.

Step 2: Once the instance is mounted, we restore the backup of the standby database.

run
{
catalog start with 'backup_location_directory';
restore database;
alter database flashback off;
recover database;
}

If you specified the recovery destination and size parameters in the pfile used to start the instance, it will try to enable flashback.
Enabling flashback before or during the recovery is not allowed, so we deactivate it for the moment.

Step 3: The restore/recover completed successfully; we try to open the database, but get some errors:

alter database open;

ORA-03113: end-of-file on communication channel
Process ID: 2588
Session ID: 1705 Serial number: 5

Step 4: Fix the errors and try to open the database:

--normal redo log groups
alter database clear unarchived logfile group YYY;

--standby redo log groups
alter database clear unarchived logfile group ZZZ;
alter database drop logfile group ZZZ;

This is not enough. Looking at the database alert log file, we can see:

LGWR: Primary database is in MAXIMUM AVAILABILITY mode 
LGWR: Destination LOG_ARCHIVE_DEST_2 is not serviced by LGWR 
LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR 

Errors in file /<TRACE_DESTINATION>_lgwr_1827.trc: 
ORA-16072: a minimum of one standby database destination is required 
LGWR: terminating instance due to error 16072 
Instance terminated by LGWR, pid = 1827

Step 5: Complete the opening procedure:

alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
alter database set standby database to maximize performance;

SQL> select name,open_mode,protection_mode from v$database;

NAME      OPEN_MODE            PROTECTION_MODE
--------- -------------------- --------------------
NAME      MOUNTED              MAXIMUM PERFORMANCE

SQL> alter database flashback on;

Database altered.

SQL> alter database open;

Database altered.

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
NAME      NAME_UNIQUE                    PRIMARY

The post Create a primary database using the backup of a standby database on 12cR2 appeared first on Blog dbi services.

Documentum – MigrationUtil – 3 – Change Server Config Name

Yann Neuhaus - Thu, 2019-02-21 02:00

In the previous blog I changed the Docbase Name from RepoTemplate to repository1 using MigrationUtil; in this blog it is the Server Config Name's turn to be changed.

In general, the repository name and the server config name are the same, except in the High Availability case.
You can find the Server Config Name in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ cat $DOCUMENTUM/dba/config/repository1/server.ini
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = DCTMDB
...
1. Migration preparation

To change the server config name to repository1, you first need to update the MigrationUtil configuration file, as below:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/config.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
<comment>Database connection details</comment>
<entry key="dbms">oracle</entry> <!-- This would be either sqlserver, oracle, db2 or postgres -->
<entry key="tgt_database_server">vmtestdctm01</entry> <!-- Database Server host or IP -->
<entry key="port_number">1521</entry> <!-- Database port number -->
<entry key="InstallOwnerPassword">install164</entry>
<entry key="isRCS">no</entry>    <!-- set it to yes, when running the utility on secondary CS -->

<!-- <comment>List of docbases in the machine</comment> -->
<entry key="DocbaseName.1">repository1</entry>

<!-- <comment>docbase owner password</comment> -->
<entry key="DocbasePassword.1">install164</entry>

...

<entry key="ChangeServerName">yes</entry>
<entry key="NewServerName.1">repository1</entry>

Set all other entries to no.
The tool will use the above information and load more from the server.ini file.

2. Execute the migration

Use the below script to execute the migration:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh
#!/bin/sh
CLASSPATH=${CLASSPATH}:MigrationUtil.jar
export CLASSPATH
java -cp "${CLASSPATH}" MigrationUtil

Update it if you need to overload the CLASSPATH only during migration.

2.a Stop the Docbase and the DocBroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

2.b Update the database name in the server.ini file
As during the Docbase Name change, this is a workaround to avoid the error below:

...
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/DCTMDB
ERROR...Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor

Check the tnsnames.ora and note the service name; in my case it is dctmdb.local.

[dmadmin@vmtestdctm01 ~]$ cat $ORACLE_HOME/network/admin/tnsnames.ora 
DCTMDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vmtestdctm01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dctmdb.local)
    )
  )

Make the change in the server.ini file:

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = RepoTemplate
database_conn = dctmdb.local
...

2.c Execute the migration script

[dmadmin@vmtestdctm01 ~]$ $DM_HOME/install/external_apps/MigrationUtil/MigrationUtil.sh

Welcome... Migration Utility invoked.
 
Skipping Docbase ID Change...
Skipping Host Name Change...
Skipping Install Owner Change...

Created log File: /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Changing Server Name...
Database owner password is read from config.xml
Finished changing Server Name...

Skipping Docbase Name Change...
Skipping Docker Seamless Upgrade scenario...

Migration Utility completed.

All changes have been recorded in the log file:

[dmadmin@vmtestdctm01 ~]$ cat /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange.log
Start: 2019-02-02 19:55:52.531
Changing Server Name
=====================

DocbaseName: repository1
Retrieving server.ini path for docbase: repository1
Found path: /app/dctm/product/16.4/dba/config/repository1/server.ini
ServerName: RepoTemplate
New ServerName: repository1

Database Details:
Database Vendor:oracle
Database Name:dctmdb.local
Databse User:RepoTemplate
Database URL:jdbc:oracle:thin:@vmtestdctm01:1521/dctmdb.local
Successfully connected to database....

Validating Server name with existing servers...
select object_name from dm_sysobject_s where r_object_type = 'dm_server_config'

Processing Database Changes...
Created database backup File '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_server_config' and object_name = 'RepoTemplate'
update dm_sysobject_s set object_name = 'repository1' where r_object_id = '3d0f449880000102'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_jms_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.repository1' where r_object_id = '080f4498800010a9'
select r_object_id,object_name from dm_sysobject_s where r_object_type = 'dm_cont_transfer_config' and object_name like '%repository1.RepoTemplate%'
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.repository1' where r_object_id = '080f4498800004ba'
select r_object_id,target_server from dm_job_s where target_server like '%repository1.RepoTemplate%'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800010d3'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035e'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000035f'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000360'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000361'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000362'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000363'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000364'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000365'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000366'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000367'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000372'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000373'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000374'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000375'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000376'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000377'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000378'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000379'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037a'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f44988000037b'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000386'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000387'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000388'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000389'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000e42'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000cb1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d02'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d04'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f449880000d05'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003db'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dc'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003dd'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003de'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003df'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e0'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e1'
update dm_job_s set target_server = 'repository1.repository1@vmtestdctm01' where r_object_id = '080f4498800003e2'
Successfully updated database values...

Processing File changes...
Backed up '/app/dctm/product/16.4/dba/config/repository1/server.ini' to '/app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup'
Updated server.ini file:/app/dctm/product/16.4/dba/config/repository1/server.ini
Backed up '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties' to '/app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup'
Updated acs.properties: /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
Finished processing File changes...
Finished changing server name 'repository1'

Processing startup and shutdown scripts...
Backed up '/app/dctm/product/16.4/dba/dm_start_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup'
Updated dm_startup script.
Backed up '/app/dctm/product/16.4/dba/dm_shutdown_repository1' to '/app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup'
Updated dm_shutdown script.

Finished changing server name....
End: 2019-02-02 19:55:54.687

2.d Reset the value of database_conn in the server.ini file

[dmadmin@vmtestdctm01 ~]$ vi $DOCUMENTUM/dba/config/repository1/server.ini
...
[SERVER_STARTUP]
docbase_id = 1000600
docbase_name = repository1
server_config_name = repository1
database_conn = DCTMDB
...
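
A quick grep confirms that both values are back to what the repository expects (sketch):

[dmadmin@vmtestdctm01 ~]$ grep -E 'server_config_name|database_conn' $DOCUMENTUM/dba/config/repository1/server.ini
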
3. Check after update

Start the Docbroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the log to be sure that the repository has been started correctly. Notice that the log name has been changed from RepoTemplate.log to repository1.log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/repository1.log
...
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:09.807613	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29345, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:00:10.809686	29293[29293]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 29362, session 010f44988000000c) is started sucessfully."
4. Is a manual rollback possible?

In fact, in the MigrationUtilLogs folder you can find the logs, backups of the start/stop scripts, and also the SQL file for a manual rollback:

[dmadmin@vmtestdctm01 ~]$ ls -rtl $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
total 980
-rw-rw-r-- 1 dmadmin dmadmin   4323 Feb  2 19:55 ServerNameChange_DatabaseRestore.sql
-rwxrw-r-- 1 dmadmin dmadmin   2687 Feb  2 19:55 dm_start_repository1_server_RepoTemplate.backup
-rwxrw-r-- 1 dmadmin dmadmin   3623 Feb  2 19:55 dm_shutdown_repository1_server_RepoTemplate.backup
-rw-rw-r-- 1 dmadmin dmadmin   6901 Feb  2 19:55 ServerNameChange.log

Let's see the content of the SQL file:

[dmadmin@vmtestdctm01 ~]$ cat $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs/ServerNameChange_DatabaseRestore.sql
update dm_sysobject_s set object_name = 'RepoTemplate' where r_object_id = '3d0f449880000102';
update dm_sysobject_s set object_name = 'JMS vmtestdctm01:9080 for repository1.RepoTemplate' where r_object_id = '080f4498800010a9';
update dm_sysobject_s set object_name = 'ContTransferConfig_repository1.RepoTemplate' where r_object_id = '080f4498800004ba';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f4498800010d3';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035e';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f44988000035f';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000360';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000361';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000362';
update dm_job_s set target_server = 'repository1.RepoTemplate@vmtestdctm01' where r_object_id = '080f449880000363';
...

I already noted that a manual rollback is possible after the Docbase ID and Docbase Name changes, but I didn’t test it… I would like to try this one.
So, to roll back:
Stop the Docbase and the Docbroker

$DOCUMENTUM/dba/dm_shutdown_repository1
$DOCUMENTUM/dba/dm_stop_DocBroker

Execute the SQL:

[dmadmin@vmtestdctm01 ~]$ cd $DM_HOME/install/external_apps/MigrationUtil/MigrationUtilLogs
[dmadmin@vmtestdctm01 MigrationUtilLogs]$ sqlplus /nolog
SQL*Plus: Release 12.1.0.2.0 Production on Sun Feb 17 19:53:12 2019
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

SQL> conn RepoTemplate@DCTMDB
Enter password: 
Connected.
SQL> @ServerNameChange_DatabaseRestore.sql
1 row updated.
1 row updated.
1 row updated.
...

The DB user is still RepoTemplate; it wasn’t changed when I changed the docbase name.
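
Before leaving sqlplus, the same query the tool used earlier can confirm that the server config name is back to RepoTemplate (sketch):

SQL> select object_name from dm_sysobject_s where r_object_type = 'dm_server_config';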

Copy back the saved files; you can find the list of files updated and backed up in the log:

cp /app/dctm/product/16.4/dba/config/repository1/server.ini_server_RepoTemplate.backup /app/dctm/product/16.4/dba/config/repository1/server.ini
cp /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties_server_RepoTemplate.backup /app/dctm/product/16.4/wildfly9.0.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/config/acs.properties
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_start_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_start_repository1
cp /app/dctm/product/16.4/product/16.4/install/external_apps/MigrationUtil/MigrationUtilLogs/dm_shutdown_repository1_server_RepoTemplate.backup /app/dctm/product/16.4/dba/dm_shutdown_repository1

Remember to change back the database connection in /app/dctm/product/16.4/dba/config/repository1/server.ini (see step 2.d).

Then start the DocBroker and the Docbase:

$DOCUMENTUM/dba/dm_launch_DocBroker
$DOCUMENTUM/dba/dm_start_repository1

Check the repository log:

[dmadmin@vmtestdctm01 ~]$ tail -5 $DOCUMENTUM/dba/log/RepoTemplate.log
...
2019-02-02T20:15:59.677595	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19232, session 010f44988000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:00.679566	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19243, session 010f44988000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2019-02-02T20:16:01.680888	19200[19200]	0000000000000000	[DM_WORKFLOW_I_AGENT_START]info:  "Workflow agent worker (pid : 19255, session 010f44988000000c) is started sucessfully."

Yes, the rollback works correctly! :D That said, I hope you will not have to do it on a production environment. ;)

The post Documentum – MigrationUtil – 3 – Change Server Config Name appeared first on Blog dbi services.

Please help to understand how Nested Table cardinality estimation works in Oracle 12C

Tom Kyte - Wed, 2019-02-20 18:26
Hi Team, Request your help with one issue that we are facing. We pass a Nested table as a variable to a select statement. Example SQL is: SELECT CAST ( MULTISET ( SELECT DEPTNBR FROM DEPT ...
Categories: DBA Blogs

Presentation of function result, which is own-type table via SELECT <func> FROM DUAL in sql developer.

Tom Kyte - Wed, 2019-02-20 18:26
Hi TOM, I've created a function that grants access to tables and views in a given schema to a given user. As a result, the function returns an own-type table that contains the prepared statement and the exception message, if thrown. 1. Creating types: ...
Categories: DBA Blogs

With clause in distributed transactions

Tom Kyte - Wed, 2019-02-20 18:26
Hi Tom ! As there is a restriction on GTTs: Distributed transactions are not supported for temporary tables. Does that mean that inline views in a query, i.e. using the WITH clause, but those with the MATERIALIZED hint, will not work properly...
Categories: DBA Blogs

opatchauto is not that dumb

Michael Dinh - Wed, 2019-02-20 17:41

I find it ironic that we want to automate yet fear automation.

Per the documentation ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1),
the following patch is required to implement ACFS:

p22810422_12102160419forACFS_Linux-x86-64.zip
Patch for Bug# 22810422
UEKR4 SUPPORT FOR ACFS(Patch 22810422)

I had inquired why opatchauto was not used to patch the entire system, versus manually patching the GI home ONLY.

To patch GI home and all Oracle RAC database homes of the same version:
# opatchauto apply _UNZIPPED_PATCH_LOCATION_/22810422 -ocmrf _ocm response file_

OCM is not included in the OPatch binaries since OPatch version 12.2.0.1.5; therefore, -ocmrf is not needed.
The reason for patching GI only is simply that the patch is needed only to enable ACFS support.

The rationale makes sense, and typically ACFS is only applied to the GI home.

Being curious, shouldn’t opatchauto know which homes to apply the patch to, where applicable?
Wouldn’t it be easier to execute opatchauto versus performing the manual steps?

What do you think and which approach would you use?
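
For the record, opatchauto can also be pointed at specific homes instead of the whole system; a minimal sketch for a GI-only run in this environment (the -oh option takes a list of homes, check the help output of your OPatch version before relying on it):

# export PATH=$PATH:/u01/app/12.1.0.1/grid/OPatch
# opatchauto apply /sf_OracleSoftware/22810422 -analyze -oh /u01/app/12.1.0.1/grid
# opatchauto apply /sf_OracleSoftware/22810422 -oh /u01/app/12.1.0.1/grid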

Here are the results from applying patch 22810422.


[oracle@racnode-dc1-2 22810422]$ pwd
/sf_OracleSoftware/22810422

[oracle@racnode-dc1-2 22810422]$ sudo su -
Last login: Wed Feb 20 23:13:52 CET 2019 on pts/0

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:21:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-21-53PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-22-12PM.log
The id for this session is YBG6

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-22-24PM_1.log



OPatchauto session completed at Wed Feb 20 23:24:53 2019
Time taken to complete the session 3 minutes, 7 seconds


[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:25:12 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-25-19PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-25-38PM.log
The id for this session is 3BYS

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-28-00PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-36-59PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-30-39PM_1.log



OPatchauto session completed at Wed Feb 20 23:40:10 2019
Time taken to complete the session 14 minutes, 58 seconds
[root@racnode-dc1-2 ~]#

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-2 ~]$

====================================================================================================

### Checking resources while patching racnode-dc1-1
[oracle@racnode-dc1-2 ~]$ crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-2            169.254.178.60 172.1
                                                             6.9.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-2 ~]$


====================================================================================================

[oracle@racnode-dc1-1 ~]$ sudo su -
Last login: Wed Feb 20 23:02:19 CET 2019 on pts/0

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"

[root@racnode-dc1-1 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:43:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-43-54PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-44-12PM.log
The id for this session is M9KF

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-44-26PM_1.log



OPatchauto session completed at Wed Feb 20 23:46:31 2019
Time taken to complete the session 2 minutes, 45 seconds

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:47:13 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-47-20PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-47-38PM.log
The id for this session is RHMR

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-50-01PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-58-57PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-52-37PM_1.log



OPatchauto session completed at Thu Feb 21 00:01:15 2019
Time taken to complete the session 14 minutes, 3 seconds

[root@racnode-dc1-1 ~]# logout

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-1 ~]$
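
Since the whole point of 22810422 is ACFS support, a quick post-patch check on each node does not hurt (a sketch; acfsdriverstate ships with Grid Infrastructure):

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
[oracle@racnode-dc1-1 ~]$ $GRID_HOME/bin/acfsdriverstate supported
[oracle@racnode-dc1-1 ~]$ $GRID_HOME/bin/acfsdriverstate installed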
